A total of 4,355 results were found (search time: 15 ms); entries 91–100 follow.
91.
In smart environments, pervasive computing helps improve the daily activities of dependent people by providing personalized services. Nevertheless, such environments do not guarantee a satisfactory level of user-privacy protection or of trust between communicating entities. In this study, we propose a trust evaluation model based on a user's past and present behavior. This model is associated with a lightweight authenticated key agreement protocol (Elliptic Curve-based Simple Authentication Key Agreement). The aim is to enable the communicating entities to establish a level of trust and then succeed in mutual authentication using a scheme suitable for the low-resource devices found in smart environments. An innovation of our trust model is that it uses an accurate approach to calculate trust in different situations and includes a human-based feedback feature, namely user rating. Finally, we implemented and tested our scheme on Android mobile phones in a smart environment dedicated to handicapped people.
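The abstract gives no formulas; purely as a hypothetical illustration of behaviour-based trust evaluation with user-rating feedback, a trust score of the following shape could be computed. All names, weights and thresholds below are invented, not taken from the paper.

```python
# Hypothetical sketch of a behaviour-based trust score with user-rating
# feedback; weights and field names are invented for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrustRecord:
    past_interactions: List[float] = field(default_factory=list)  # outcomes in [0, 1]
    present_score: float = 0.5                                    # current-session score in [0, 1]
    user_ratings: List[float] = field(default_factory=list)       # human feedback in [0, 1]

def trust_value(rec: TrustRecord, w_past: float = 0.4,
                w_present: float = 0.4, w_rating: float = 0.2) -> float:
    """Weighted mix of past behaviour, present behaviour and user rating."""
    past = sum(rec.past_interactions) / len(rec.past_interactions) if rec.past_interactions else 0.5
    rating = sum(rec.user_ratings) / len(rec.user_ratings) if rec.user_ratings else 0.5
    return w_past * past + w_present * rec.present_score + w_rating * rating

# A device with a good history but a mediocre current session:
rec = TrustRecord(past_interactions=[1.0, 0.9, 0.8], present_score=0.6, user_ratings=[0.9])
print(f"trust = {trust_value(rec):.3f}")  # proceed to key agreement above a chosen threshold
```

In a scheme like the one described, such a score would gate whether the lightweight key agreement protocol is even attempted between two entities.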
92.
Duality properties have been investigated by many researchers in the recent literature. In this paper they are introduced for a fully fuzzified version of the minimal cost flow problem (MCFP), a basic model in network flow theory. This model captures the least-cost shipment of a commodity through a capacitated network when the supplies available at certain nodes, which must be transmitted to fulfil uncertain demands at other nodes, are known only imprecisely. First, we review the most valuable results on fuzzy duality concepts to facilitate the discussion of this paper. By applying Hukuhara's difference, approximated and exact multiplication, and Wu's scalar production, we exhibit the flow in network models. Then we use combinatorial algorithms on a reduced problem, derived from the fully fuzzified MCFP, to acquire fuzzy optimal flows. To give duality theorems, we utilize a total order on fuzzy numbers based on the level of risk and derive optimality conditions that yield some efficient combinatorial algorithms. Finally, we compare our results with previous worthwhile works to demonstrate the efficiency and power of our scheme and the reasonableness of our solutions in actual decision-making problems.
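For reference, the crisp minimal cost flow problem that the paper fuzzifies is the standard linear program below (the textbook formulation, not the fuzzy one studied in the paper):

```latex
\min_{x} \sum_{(i,j)\in A} c_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{j:(i,j)\in A} x_{ij} \;-\; \sum_{j:(j,i)\in A} x_{ji} \;=\; b_i \quad \forall i \in N,
\qquad
0 \;\le\; x_{ij} \;\le\; u_{ij} \quad \forall (i,j)\in A,
```

where $b_i > 0$ encodes available supply and $b_i < 0$ demand. In the fully fuzzified version studied in the paper, the costs $c_{ij}$, capacities $u_{ij}$ and supplies/demands $b_i$ become fuzzy numbers, which is what makes the duality theory non-trivial.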
93.
94.
In recent years, classification learning for data streams has become an important and active research topic. A major challenge posed by data streams is that their underlying concepts can change over time, which requires current classifiers to be revised accordingly and in a timely manner. To detect concept change, a common methodology is to observe the online classification accuracy: if accuracy drops below some threshold value, a concept change is deemed to have taken place. An implicit assumption behind this methodology is that any drop in classification accuracy can be interpreted as a symptom of concept change. Unfortunately, however, this assumption is often violated in the real world, where data streams carry noise that can also significantly reduce classification accuracy. To compound the problem, traditional noise-cleansing methods are ill-suited to data streams: they normally need to scan the data multiple times, whereas learning from data streams can only afford a one-pass scan because of the data's high speed and huge volume. Another open problem in data stream classification is how to deal with missing values. When new instances containing missing values arrive, how a learning model classifies them and updates itself accordingly is an issue whose solution is far from explored. To solve these problems, this paper proposes a novel classification algorithm, the flexible decision tree (FlexDT), which extends fuzzy logic to data stream classification. The advantages are three-fold. First, FlexDT offers a flexible structure that handles concept change effectively and efficiently. Second, FlexDT is robust to noise; it can prevent noise from interfering with classification accuracy, so that an accuracy drop can safely be attributed to concept change. Third, it deals with missing values in an elegant way. Extensive evaluations are conducted to compare FlexDT with representative existing data stream classification algorithms using a large suite of data streams and various statistical tests. Experimental results suggest that FlexDT offers a significant benefit to data stream classification in real-world scenarios where concept change, noise and missing values coexist.
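As a toy illustration of the fuzzy-logic idea behind FlexDT (not the algorithm itself; the membership functions below are invented), the sketch shows how a fuzzy split lets an instance descend both branches of a node with partial membership, which also yields a natural treatment of missing values:

```python
# Toy fuzzy decision-tree split: an instance follows both branches with
# partial membership; a missing value simply splits membership evenly.
# This illustrates the general idea only, not FlexDT's actual mechanics.
import math

def sigmoid_membership(x: float, threshold: float, slope: float = 4.0) -> float:
    """Soft version of the crisp test 'x <= threshold', returning a degree in (0, 1)."""
    return 1.0 / (1.0 + math.exp(slope * (x - threshold)))

def classify(node: dict, instance: dict, weight: float = 1.0) -> dict:
    """Accumulate class weights over every fuzzy path through the tree."""
    if "label" in node:                        # leaf node
        return {node["label"]: weight}
    value = instance.get(node["feature"])      # None models a missing value
    mu = 0.5 if value is None else sigmoid_membership(value, node["threshold"])
    scores: dict = {}
    for child, m in ((node["left"], mu), (node["right"], 1.0 - mu)):
        for label, w in classify(child, instance, weight * m).items():
            scores[label] = scores.get(label, 0.0) + w
    return scores

tree = {"feature": "x1", "threshold": 5.0,
        "left": {"label": "A"}, "right": {"label": "B"}}
print(classify(tree, {"x1": 4.8}))   # mostly class A, some B
print(classify(tree, {}))            # x1 missing: A and B each weighted 0.5
```

Because membership degrades smoothly near a threshold, a single noisy value shifts class weights only slightly instead of flipping the prediction outright.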
95.
The aim of this paper is to deal with an output controllability problem: driving the state of a distributed parabolic system toward a state lying between two prescribed functions on a boundary subregion of the system's evolution domain, using minimum-energy control. Two necessary conditions are derived. The first is formulated in terms of the subdifferential associated with a minimized functional. The second is formulated as a system of equations for the arguments of the Lagrange systems. Numerical illustrations show the efficiency of the second approach and lead to some conjectures.
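In symbols (with notation chosen here for illustration rather than copied from the paper), this is the minimum-energy control problem with a bilateral output constraint:

```latex
\min_{u \in L^2(0,T;U)} \; \tfrac{1}{2}\,\|u\|_{L^2(0,T;U)}^{2}
\quad \text{subject to} \quad
\alpha(\xi) \;\le\; y_u(T)(\xi) \;\le\; \beta(\xi)
\quad \text{for a.e. } \xi \in \Gamma,
```

where $y_u$ is the state of the parabolic system driven by the control $u$, $\Gamma$ is the boundary subregion, and $\alpha \le \beta$ are the two prescribed profiles; the paper's two necessary conditions characterise minimisers of such a constrained problem.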
96.

Background

The use of crowdsourcing in a pedagogically supported form to partner with learners in developing novel content is emerging as a viable approach for engaging students in higher-order learning at scale. However, how students behave in this form of crowdsourcing, referred to as learnersourcing, is still insufficiently explored.

Objectives

To contribute to filling this gap, this study explores how students engage with learnersourcing tasks across a range of course and assessment designs.

Methods

We conducted an exploratory study on the trace data of 1,279 students across three courses, originating from the use of a learnersourcing environment under different assessment designs. We employed a new methodology from the learning analytics (LA) field that represents students' behaviour through two theoretically derived latent constructs: learning tactics and the learning strategies built upon them (a toy sketch of this idea follows the abstract).

Results

The study's results demonstrate that students use different tactics and strategies, highlight the association of learnersourcing contexts with the identified learning tactics and strategies, indicate a significant association between the strategies and performance, and contribute to the generalisability of the employed method by applying it to a new context.

Implications

This study provides an example of how learning analytics methods can be employed in developing effective learnersourcing systems and, more broadly, technological educational solutions that support learner-centred and data-driven learning at scale. The findings should inform best practices for integrating learnersourcing activities into course design and shed light on the relevance of tactics and strategies for supporting teachers in making informed pedagogical decisions.
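The tactic/strategy detection method itself comes from the cited LA literature; purely as a loose, hypothetical analogue (the published approach relies on sequence-analysis techniques, not the naive frequency features used here), clustering session profiles into candidate tactics might look like this:

```python
# Loose, hypothetical analogue of tactic detection from trace data:
# cluster sessions by action-frequency profiles. The published method
# uses proper sequence analysis; this is a shape sketch only, and all
# action names and data below are invented.
import numpy as np
from sklearn.cluster import KMeans

ACTIONS = ["create_question", "answer_question", "rate_question", "view_feedback"]

sessions = [  # each session: the logged actions of one student sitting
    ["create_question", "view_feedback", "create_question"],
    ["answer_question", "answer_question", "rate_question"],
    ["rate_question", "rate_question", "view_feedback"],
    ["create_question", "answer_question", "view_feedback"],
]

def profile(session):
    """Relative frequency of each action type within a session."""
    counts = np.array([session.count(a) for a in ACTIONS], dtype=float)
    return counts / counts.sum()

X = np.vstack([profile(s) for s in sessions])
tactics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(tactics)  # one cluster label per session = candidate 'tactic'
# Strategies would then be derived from how tactics sequence over a course.
```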
97.

In this article, we present a new set of hybrid polynomials, and their corresponding moments, for the localization, compression and reconstruction of 2D and 3D images. These polynomials are formed from the Hahn and Krawtchouk polynomials, and their computation is stabilized using recurrence relations modified with respect to the order n and the variable x, together with the symmetry property. The hybrid polynomials are generated in two forms. The first form comprises the separable discrete orthogonal Krawtchouk–Hahn (DKHP) and Hahn–Krawtchouk (DHKP) polynomials, generated as products of the discrete orthogonal Hahn and Krawtchouk polynomials. The second form is the squared equivalent of the first: the discrete squared Krawtchouk–Hahn (SKHP) and squared Hahn–Krawtchouk (SHKP) polynomials. The experimental results clearly show the efficiency of the hybrid moments in terms of localization and computation time for 2D and 3D images compared with other types of moments; encouraging results are also obtained for reconstruction quality and compression, despite the superiority of classical polynomials there.
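In the notation suggested by the abstract (the parameterisations $p$, $a$, $b$ are assumptions made here for concreteness), the separable hybrid polynomials are products of the two classical bases, and the image moments follow by projection:

```latex
\mathrm{DKHP}_{n,m}(x,y) \;=\; K_n(x;\,p,N)\; H_m(y;\,a,b,N),
\qquad
M_{n,m} \;=\; \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\,\mathrm{DKHP}_{n,m}(x,y),
```

with the image recovered by orthogonality as $f(x,y) \approx \sum_{n,m} M_{n,m}\,\mathrm{DKHP}_{n,m}(x,y)$; the DHKP, SKHP and SHKP variants follow the same pattern with the factors swapped or squared.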

98.

The edge computing model offers an ideal platform for supporting scientific and real-time workflow-based applications at the edge of the network. However, scientific workflow scheduling and execution still face challenges such as response-time management and latency; in particular, the acquisition delay of servers deployed at the network edge must be accounted for in order to reduce the overall completion time of a workflow. Previous studies show that existing scheduling methods consider only the static performance of the server and ignore the impact of resource acquisition delay when scheduling workflow tasks. We propose a meta-heuristic algorithm that schedules scientific workflows and minimizes the overall completion time by properly managing acquisition and transmission delays. We carried out extensive experiments and evaluations based on commercial clouds and various scientific workflow templates. The proposed method performs approximately 7.7% better than the baseline algorithms, particularly in the success rate of meeting the overall deadline constraint.
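The abstract does not name the meta-heuristic; the hypothetical cost function below only illustrates the central point that a schedule must charge tasks with server acquisition delay in addition to transmission and execution time (tasks are serialised here for simplicity, and all names and values are invented):

```python
# Hypothetical makespan model for an edge workflow schedule, charging
# the acquisition delay of a cold server explicitly. An illustration of
# the paper's point, not its actual algorithm.
def makespan(schedule: dict, tasks: dict, servers: dict) -> float:
    """schedule: task -> server; tasks: name -> (exec_s, data_mb);
    servers: name -> (acquire_delay_s, bandwidth_mbps)."""
    t = 0.0
    warm = set()                       # servers already acquired earlier
    for task, server in schedule.items():
        exec_s, data_mb = tasks[task]
        acquire_s, bw_mbps = servers[server]
        if server not in warm:         # pay the acquisition delay only once
            t += acquire_s
            warm.add(server)
        t += data_mb * 8.0 / bw_mbps   # input transmission
        t += exec_s                    # execution
    return t

tasks = {"t1": (2.0, 50.0), "t2": (1.0, 10.0)}
servers = {"edge1": (3.0, 100.0)}
print(makespan({"t1": "edge1", "t2": "edge1"}, tasks, servers))  # 10.8
# A meta-heuristic would search over candidate schedules minimising this.
```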

99.
The Journal of Supercomputing - This paper designs and develops a computational intelligence-based framework using a convolutional neural network (CNN) and a genetic algorithm (GA) to detect COVID-19...
100.
Data available in software engineering for many applications contains variability, and it is not always possible to say which variables help in prediction. Most work on software defect prediction focuses on selecting the best prediction techniques, for which deep learning and ensemble models have shown promising results. In contrast, very little research deals with cleaning the training data and selecting the best parameter values from the data. Data available for training may have high variability, and this variability can decrease model accuracy. To deal with this problem we used the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to select the best variables for training the model. A simple artificial neural network (ANN) with one input layer, one output layer and two hidden layers was used for training, instead of a very deep and complex model. First, the variables were narrowed down using correlation values; then subsets were formed for all possible combinations of the remaining variables. Finally, an ANN model was trained on each subset, and the model with the smallest AIC and BIC values was selected as the best. We found that the combination of only two variables, ns and entropy, is best for software defect prediction, as it gives the minimum AIC and BIC values, whereas nm and npt is the worst combination, giving the maximum AIC and BIC values.
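Schematically, the selection loop described above can be expressed in a few lines. In this sketch a linear least-squares fit stands in for the paper's ANN, and the synthetic data is constructed to echo the reported finding that the pair (ns, entropy) wins, so it illustrates the AIC/BIC subset-selection loop only:

```python
# Schematic subset-selection loop: enumerate variable subsets, fit a
# model on each, and keep the subset minimising AIC (BIC as tie-break).
# A linear fit replaces the paper's ANN; data and names are synthetic.
import itertools
import numpy as np

def gaussian_aic_bic(y, y_hat, k):
    """AIC and BIC for a least-squares fit under Gaussian errors
    (log-likelihood written up to an additive constant)."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    ll_term = n * np.log(rss / n)
    return ll_term + 2 * k, ll_term + k * np.log(n)

def best_subset(X, y, names):
    best = None
    for r in range(1, len(names) + 1):
        for cols in itertools.combinations(range(len(names)), r):
            cols = list(cols)
            Xs = np.column_stack([X[:, cols], np.ones(len(y))])  # add intercept
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            aic, bic = gaussian_aic_bic(y, Xs @ beta, k=len(cols) + 1)
            if best is None or (aic, bic) < best[:2]:
                best = (aic, bic, [names[c] for c in cols])
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # columns: ns, entropy, nm, npt
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)
print(best_subset(X, y, ["ns", "entropy", "nm", "npt"]))
```

The AIC/BIC penalties make adding a spurious variable costlier than the tiny residual improvement it buys, which is why the loop settles on the small, informative subset.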