21.
Poly(n-butyl methacrylate) (PBMA) composites with calcium carbonate (CaCO3) were prepared by in situ radical copolymerization of butyl methacrylate (BMA) and methacrylic acid (MA) with precipitated calcium carbonate. To compare the rheological behaviors of the monomer mixtures containing CaCO3 with those of the composites, the steady and dynamic viscosities of BMA/MA/CaCO3 and poly(BMA/MA)/CaCO3 were measured in steady and oscillatory shear flows. The viscosity of the BMA/MA/CaCO3 mixture increased markedly with increasing CaCO3 content, whereas the influence of the MA fraction on viscosity was slight. During the in situ polymerization, the viscosity of the reacting system increased by a factor of about 10^4 from the monomer/CaCO3 mixture to the composite. The dependence of the zero-shear viscosity on the molar mass of PBMA was also investigated; the relation is η0 = 10^(−15) Mw^3.5. The temperature dependence of the viscosity of both PBMA and its composites was obtained, and time–temperature superposition was used to build master curves for the dynamic moduli. The flow activation energies were 115.0, 148.6, and 178.7 kJ/mol for PBMA, composite PBMA/CaCO3 (90/10), and PBMA/MA/CaCO3 (89/1/10), respectively. The viscosity of composites containing less than 10% CaCO3 was lower than that of pure PBMA of the same molar mass. © 2003 Wiley Periodicals, Inc. J Appl Polym Sci 88: 1376–1383, 2003
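The power law and Arrhenius-type temperature dependence reported in this abstract can be sketched numerically. The prefactor 10^(−15), the exponent 3.5, and the 115 kJ/mol activation energy come from the abstract; the temperatures and the prefactor A below are illustrative values, not the paper's data.

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def zero_shear_viscosity(mw):
    """Power law reported in the abstract: eta0 = 1e-15 * Mw**3.5 (Pa·s)."""
    return 1e-15 * mw ** 3.5

def activation_energy(t1_k, eta1, t2_k, eta2):
    """Estimate the flow activation energy Ea (J/mol) from viscosities at two
    temperatures, assuming Arrhenius behaviour eta = A * exp(Ea / (R*T))."""
    return R * math.log(eta1 / eta2) / (1.0 / t1_k - 1.0 / t2_k)

# Doubling the molar mass raises eta0 by 2**3.5 (about 11.3x)
ratio = zero_shear_viscosity(2.0e5) / zero_shear_viscosity(1.0e5)

# Synthetic check: with Ea = 115 kJ/mol, two simulated measurements
# at 20 °C and 60 °C recover the same Ea (A is an arbitrary prefactor)
A = 1e-10
ea_true = 115_000.0
eta_20 = A * math.exp(ea_true / (R * 293.15))
eta_60 = A * math.exp(ea_true / (R * 333.15))
ea_est = activation_energy(293.15, eta_20, 333.15, eta_60)
```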
22.
Styrene monomer was emulsified in water at several magnetite nanoparticle concentrations and pH values. The emulsified styrene drops were used as templates for polymerization in the presence of a water-soluble free-radical initiator, forming composite particles. Stabilization of the styrene template drops was verified by light and scanning electron microscopy imaging, which confirmed that the particles build up a mechanical barrier that stops the oil drops from coalescing. Furthermore, the produced polystyrene composites were strongly attracted to an external magnet. The difference in particle size as a function of pH was elucidated using zeta potential measurements, which indicated that pH dominates the hydrophilicity of the particles and consequently the extent of emulsification, which in turn affects the size of the obtained microspheres. Under some conditions, capsules were formed instead of particles. It can thus be concluded that the magnetic microspheres are optimally formed at pH 2.3, independently of the magnetite content used.
23.
This paper develops a comprehensive interpolation scheme for non-uniform rational B-spline (NURBS) curves that not only simultaneously meets the requirements of constant feedrate and chord accuracy, but also integrates machining dynamics into the interpolation stage in real time. Although existing work has recognized the importance of simultaneously considering chord error and machining dynamics, none has incorporated both in one complete interpolation scheme. In this paper, machining dynamics is considered in three respects: sharp (feedrate-sensitive) corners on the curves, feedrate components containing high frequencies or frequencies matching the machine's natural frequencies, and high jerks. A look-ahead module was developed to detect sharp corners and adaptively adjust the feedrate at them. Fast Fourier Transform (FFT) analysis with a moving window in the interpolation stage identified the problematic frequency components, such as those containing high frequencies or matching the machine's natural frequencies; these were then eliminated by notch filtering or the time-spacing method. To further reduce feedrate and acceleration fluctuations, a jerk-limited algorithm was also developed. Finally, the interpolated feedrate was smoothed with a B-spline fitting method and the NURBS curves were re-interpolated with the smoothed feedrate. During interpolation, the chord error was repeatedly checked and confined within the prescribed tolerance. Two NURBS curves were used as examples to test the feasibility of the developed interpolation scheme.
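The chord-accuracy constraint described above has a standard geometric form: a straight interpolation step of length v·T across an arc of curvature radius ρ leaves a chord error δ = ρ − sqrt(ρ² − (vT/2)²), which bounds the admissible feedrate. The sketch below is this textbook bound, not the paper's full scheme; the numeric values are illustrative.

```python
import math

def chord_limited_feedrate(rho, delta, period, v_cmd):
    """Cap the commanded feedrate v_cmd so the chord error of one
    interpolation step of duration `period` stays within tolerance `delta`
    on a curve segment of curvature radius `rho`:
        v_max = (2 / T) * sqrt(2*rho*delta - delta**2)."""
    v_max = (2.0 / period) * math.sqrt(2.0 * rho * delta - delta ** 2)
    return min(v_cmd, v_max)

# rho = 5 mm, tolerance 1 µm, 1 ms interpolation period:
# a 300 mm/s command is capped to roughly 200 mm/s
capped = chord_limited_feedrate(rho=5.0, delta=1e-3, period=1e-3, v_cmd=300.0)
```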
24.
There is significant interest in the network management and industrial security communities in identifying the "best" and most relevant features of network traffic in order to properly characterize user behaviour and predict future traffic. The ability to eliminate redundant features is an important Machine Learning (ML) task because it helps to identify the best features, improving classification accuracy and reducing the computational complexity of constructing the classifier. In practice, feature selection (FS) techniques can be used as a preprocessing step to eliminate irrelevant features and as a knowledge discovery tool to reveal the "best" features in many soft computing applications. In this paper, we investigate the advantages and disadvantages of such FS techniques with newly proposed metrics (namely goodness, stability, and similarity). We continue our efforts toward developing an integrated FS technique built on the key strengths of existing FS techniques. A novel way is proposed to identify the "best" features efficiently and accurately: first, the results of several well-known FS techniques are combined to find consistent features; then the proposed concept of support is used to select the smallest set of features that covers the data optimally. An empirical study over ten high-dimensional network traffic data sets demonstrates a significant gain in accuracy and improved run-time performance of a classifier compared to the individual results produced by the well-known FS techniques.
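The combination step described above can be sketched as a voting scheme over the ranked outputs of several FS techniques. This is a simplified stand-in for the paper's method: the feature names, the top-k cutoff, and the vote threshold are illustrative assumptions, and the support-based minimal cover is not shown.

```python
from collections import Counter

def consistent_features(rankings, top_k, min_votes):
    """Combine several FS results (each a list of feature names ranked
    best-first): a feature is kept as 'consistent' if it appears in the
    top_k of at least min_votes of the techniques."""
    votes = Counter()
    for ranked in rankings:
        votes.update(ranked[:top_k])
    return {feat for feat, v in votes.items() if v >= min_votes}

# Hypothetical rankings from three FS techniques over traffic features
rankings = [
    ["pkt_size", "duration", "dst_port", "ttl"],
    ["duration", "pkt_size", "proto", "dst_port"],
    ["dst_port", "pkt_size", "flags", "duration"],
]
selected = consistent_features(rankings, top_k=3, min_votes=2)
```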
25.
Neural Computing and Applications - Many different methods are being adopted to improve educational standards through monitoring of classrooms. The developed world uses Smart...
26.

In the modern era, we perceive big data as massive datasets with complex and varied structures. These attributes create obstacles to analyzing and storing the data and generating apt results. Privacy and security are major concerns in the domain of extensive data analysis. In this paper, our foremost priority is the computing technologies that focus on big data: IoT (Internet of Things), Cloud Computing, Blockchain, and fog computing. Among these, Cloud Computing provides on-demand services to its customers while optimizing the cost factor; AWS, Azure, and Google Cloud are the major cloud providers today. Fog computing offers new insights into the extension of cloud computing systems by bringing services to the edges of the network. The Internet of Things, in collaboration with multiple technologies, puts this into effect and addresses the challenge of delivering advanced services across varied application domains. The Blockchain is a distributed ledger that supports many applications ranging from crypto-currency to smart contracts. The aim of this paper is to present a critical analysis and review under the umbrella of existing extensive data systems. We address the existing threats to the security of extensive data systems and scrutinize security attacks on computing systems based on Cloud, Blockchain, IoT, and fog. The paper clearly illustrates the different threat behaviours and their impacts on these complementary computational technologies. The authors also present a precise analysis of cloud-based technologies and discuss their defense mechanisms and the security issues of mobile healthcare.
27.
Scalability is one of the most important quality attributes of software-intensive systems, because it maintains effective performance under large, fluctuating, and sometimes unpredictable workloads. To achieve scalability, the thread pool system (TPS) (also known as an executor service) has been used extensively as a middleware service in software-intensive systems. TPS optimization is the challenging problem of determining the optimal thread pool size dynamically at runtime. For a distributed TPS (DTPS), a further issue is load balancing between the available set of TPSs running on backend servers. Existing DTPSs become overloaded either due to an inappropriate TPS optimization strategy at the backend servers or an improper load balancing scheme that cannot quickly recover from an overload; consequently, the performance of the software-intensive system suffers. In this paper, we therefore propose a new DTPS that follows a collaborative round-robin load balancing scheme with the effect of a double-edged sword. On the one hand, it performs effective load balancing in an overload situation among the available TPSs through a fast overload recovery procedure that decelerates the load on overloaded TPSs down to their capacities and shifts the remaining load towards other gracefully running TPSs. On the other hand, its robust load deceleration technique, applied to an overloaded TPS, sets an appropriate upper bound on the thread pool size: because the pool size of each TPS is kept equal to its request rate, the TPS is dynamically optimized. We evaluated the proposed system against state-of-the-art DTPSs with a client-server based simulator and found that our system outperformed them by sustaining smaller response times.
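The dispatch policy described above, round robin that skips saturated pools so excess load shifts to gracefully running ones, can be sketched as follows. The class names and the fixed per-tick capacity model are illustrative assumptions, not the paper's simulator.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class ThreadPoolServer:
    """Backend TPS with a fixed capacity (requests it can absorb per tick)."""
    name: str
    capacity: int
    load: int = 0

    def free_slots(self):
        return max(self.capacity - self.load, 0)

def dispatch(servers, requests):
    """Collaborative round robin: offer each request to the pools in turn,
    skipping any pool already at capacity, so overflow shifts to other
    pools instead of piling onto an overloaded one. Returns the number of
    requests rejected because every pool was saturated."""
    ring = cycle(servers)
    rejected = 0
    for _ in range(requests):
        for _ in range(len(servers)):
            server = next(ring)
            if server.free_slots() > 0:
                server.load += 1
                break
        else:
            rejected += 1  # all pools saturated
    return rejected

pools = [ThreadPoolServer("tps-a", 4), ThreadPoolServer("tps-b", 2), ThreadPoolServer("tps-c", 4)]
overflow = dispatch(pools, 9)  # tps-b fills at 2; its share spills to a and c
```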
28.
The reference range is a statistic used in health-related fields to represent the range of the most likely values of a variable of interest. Based on this range, individuals are classified as healthy or unhealthy. In biostatistics, the reference range is calculated as the (1 − α)% prediction interval, where this prediction interval is based on the population variance estimated from the data. Such estimation of the population variance is imprecise, because the obtained test results usually have errors associated with them, arising from the imprecise test procedure or gauge used. In this paper, the total variability in the data is decomposed into two categories: the patient-to-patient variability and the variability due to the measurement system used. The two components are estimated through a gauge repeatability and reproducibility study, and the reference range is then calculated taking into account only the patient-to-patient variability. The revised reference range procedure is illustrated through a case study of vitamin B12 test results. A closed-form formula is given to calculate the probability that a given test result falls within the revised reference range. Copyright © 2015 John Wiley & Sons, Ltd.
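The variance decomposition described above can be sketched directly: subtract the gauge R&R variance from the total variance and build the range from what remains. The vitamin B12 numbers below are hypothetical, and the simple z-based normal interval stands in for the paper's closed-form prediction interval.

```python
import math

def revised_reference_range(mean, sd_total, sd_gauge, z=1.96):
    """Sketch of the revised procedure: strip measurement-system variance
    (estimated from a gauge R&R study) out of the total variance, then
    form the range from patient-to-patient variability alone:
        sigma_patient^2 = sigma_total^2 - sigma_gauge^2  (independence assumed);
    z = 1.96 gives an approximate 95% range under normality."""
    var_patient = sd_total ** 2 - sd_gauge ** 2
    if var_patient <= 0:
        raise ValueError("gauge variance exceeds total variance")
    sd_patient = math.sqrt(var_patient)
    return mean - z * sd_patient, mean + z * sd_patient

# Hypothetical vitamin B12 figures (pg/mL), for illustration only:
# sd_patient = sqrt(130**2 - 50**2) = 120, so the range narrows vs. naive
lo, hi = revised_reference_range(mean=500.0, sd_total=130.0, sd_gauge=50.0)
```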
29.
This paper presents a new individual-based optimization algorithm inspired by asexual reproduction, a remarkable biological phenomenon, called asexual reproduction optimization (ARO). ARO can essentially be considered an evolutionary algorithm that mathematically models the budding mechanism of asexual reproduction. In ARO, a parent produces a bud through a reproduction operator; thereafter the parent and its bud compete to survive according to a performance index obtained from the underlying objective function of the optimization problem, leaving the fitter individual. The adaptive search ability of ARO, along with its strengths and weaknesses, is fully described in the paper, and the convergence of ARO to the global optimum is mathematically analyzed. To confirm the effectiveness of ARO, it is tested on several benchmark functions frequently used in the area of optimization, and its performance is statistically compared with that of an improved genetic algorithm (GA). Simulation results illustrate that ARO remarkably outperforms the GA.
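The parent-versus-bud competition described above reduces to a compact loop: one individual, one mutated offspring, survival of the fitter. The Gaussian bud operator and the shrinking step schedule below are our own simplification for illustration, not the paper's exact reproduction operator.

```python
import random

def aro_minimize(objective, lower, upper, dim=2, iters=2000, seed=0):
    """Minimal sketch of the ARO idea: a single parent produces a mutated
    'bud', and whichever of the two scores better on the objective
    survives into the next iteration (greedy replacement)."""
    rng = random.Random(seed)
    parent = [rng.uniform(lower, upper) for _ in range(dim)]
    best = objective(parent)
    for i in range(iters):
        step = (upper - lower) * 0.1 * (1.0 - i / iters)  # shrinking steps
        bud = [min(max(x + rng.gauss(0.0, step), lower), upper) for x in parent]
        f_bud = objective(bud)
        if f_bud < best:           # the bud out-competes the parent
            parent, best = bud, f_bud
    return parent, best

# Benchmark: the sphere function, minimized at the origin
sphere = lambda v: sum(x * x for x in v)
solution, value = aro_minimize(sphere, -5.0, 5.0)
```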
30.
Multiple Sequence Alignment (MSA) of biological sequences is a fundamental problem in computational biology due to its critical significance in wide-ranging applications including haplotype reconstruction, sequence homology, phylogenetic analysis, and prediction of evolutionary origins. The MSA problem is NP-hard, and known heuristics for it do not scale well with increasing numbers of sequences. On the other hand, with the advent of a new breed of fast sequencing techniques, it is now possible to generate thousands of sequences very quickly, so for rapid sequence analysis it is desirable to develop fast MSA algorithms that scale well with the dataset size. In this paper, we present a novel domain-decomposition-based technique to solve the MSA problem on multiprocessing platforms. Besides yielding better quality, domain decomposition gives enormous advantages in execution time and memory requirements. The proposed strategy decreases the time complexity of any known heuristic of O(N^x) complexity by a factor of O(1/p^x), where N is the number of sequences, x depends on the underlying heuristic, and p is the number of processing nodes. In particular, we propose a highly scalable algorithm, Sample-Align-D, for aligning biological sequences using the Muscle system as the underlying heuristic. The proposed algorithm has been implemented on a cluster of workstations using the MPI library. Experimental results for different problem sizes are analyzed in terms of alignment quality, execution time, and speed-up.
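The complexity claim above can be checked with simple arithmetic: a heuristic costing ~N^x on one node, run in parallel on p nodes over domains of ~N/p sequences each, takes ~(N/p)^x per node. The numbers below are illustrative; merge overhead and constant factors are ignored.

```python
def decomposition_speedup(n_seqs, x, p):
    """Illustrative check of the O(1/p^x) claim: serial time ~N**x versus
    parallel time ~(N/p)**x = N**x / p**x, i.e. a factor of p**x smaller
    (merge overhead and constants ignored)."""
    serial_time = n_seqs ** x
    parallel_time = (n_seqs / p) ** x
    return serial_time / parallel_time

# With x = 3 and p = 8 nodes, the predicted factor is 8**3 = 512,
# independently of N
speedup = decomposition_speedup(n_seqs=1024, x=3, p=8)
```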