201.

In this era of the Internet, the exchange of data between users and service providers has grown tremendously. Organizations in the health, banking, social networking, criminal justice and government sectors collect and process individuals’ information for their own purposes. However, collecting and sharing individuals’ information, which may be sensitive and confidential, for data mining can breach data privacy. In many applications, selectively collected confidential and sensitive user information needs to be modified to protect it from unauthorized access and disclosure. Many data mining techniques, including statistical, k-anonymity, cryptographic, perturbation and randomization methods, have evolved to protect and preserve data privacy. Each of these techniques has its own limitations: either the privacy protection is inadequate or the computational complexity is high and expensive. To address these limitations, a methodology comprising encoding and randomization is proposed to preserve privacy. In this technique, called Randomized Encoding (RE), encoding is combined with the addition of random noise drawn from a known distribution to perturb the original data before it is released to the public domain. The core component of this technique is the novel primitive of Randomized Encoding (RE), which is similar in spirit to other cryptographic algorithms. An approximation of the original data distribution is then reconstructed from the perturbed data and used for data mining. There is always a trade-off between information loss and privacy preservation. To balance privacy and data utility, the dataset attributes are first classified into sensitive attributes and quasi-identifiers. The pre-classified confidential and sensitive attributes are perturbed using Base64 encoding together with randomly generated noise. The analysis of the experiments conducted with this variable, dynamic approach suggests that the proposed technique is computationally efficient and preserves privacy while adequately maintaining data utility, in comparison with other privacy-preserving techniques such as anonymization.
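The abstract only outlines the perturbation scheme at a high level, so the following is a minimal Python sketch of the general idea rather than the paper’s exact algorithm: a sensitive categorical attribute is Base64-encoded, a sensitive numeric attribute is perturbed with additive noise drawn from a publicly known distribution, and aggregate statistics are approximately reconstructed from the release. The attribute names, noise parameters and records are illustrative assumptions.

```python
import base64
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical records: 'disease' is treated as a sensitive categorical
# attribute, 'salary' as a sensitive numeric attribute. The attribute names
# and the Gaussian noise parameters are assumptions, not taken from the paper.
records = [{"disease": "diabetes", "salary": 52000.0},
           {"disease": "asthma",   "salary": 61000.0},
           {"disease": "diabetes", "salary": 47000.0}]

NOISE_MEAN, NOISE_STD = 5000.0, 1500.0   # publicly known noise distribution

def perturb(record):
    """Encode the categorical value and add noise from the known distribution."""
    return {
        "disease": base64.b64encode(record["disease"].encode()).decode(),
        "salary": record["salary"] + rng.normal(NOISE_MEAN, NOISE_STD),
    }

released = [perturb(r) for r in records]

# The data miner never sees the raw salaries, but because the noise
# distribution is public, aggregate statistics can be approximately
# reconstructed from the perturbed release.
approx_mean_salary = np.mean([r["salary"] for r in released]) - NOISE_MEAN
print(released)
print(f"approximate mean salary: {approx_mean_salary:.0f}")
```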

202.
Theory of relative defect proneness
In this study, we investigated the functional form of the size-defect relationship for software modules through replicated studies conducted on ten open-source products. We consistently observed a power-law relationship where defect proneness increases at a slower rate compared to size. Therefore, smaller modules are proportionally more defect prone. We externally validated the application of our results for two commercial systems. Given limited and fixed resources for code inspections, there would be an impressive improvement in the cost-effectiveness, as much as 341% in one of the systems, if a smallest-first strategy were preferred over a largest-first one. The consistent results obtained in this study led us to state a theory of relative defect proneness (RDP): In large-scale software systems, smaller modules will be proportionally more defect-prone compared to larger ones. We suggest that practitioners consider our results and give higher priority to smaller modules in their focused quality assurance efforts.
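As an illustration of why a sublinear power law favours smallest-first inspection, the short simulation below assumes a hypothetical defect model defects ≈ c·size^b with b < 1 and compares how many defects a fixed inspection budget covers under the two orderings. The module sizes, c and b are made-up values, not the exponents fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical module sizes (LOC) and a power-law defect model with b < 1;
# c, b and the size distribution are illustrative assumptions.
sizes = rng.lognormal(mean=5.0, sigma=1.0, size=500).astype(int) + 10
c, b = 0.02, 0.7
defects = c * sizes.astype(float) ** b

def defects_found(order, budget_loc):
    """Inspect modules in the given order until the LOC budget runs out."""
    found, spent = 0.0, 0
    for i in order:
        if spent + sizes[i] > budget_loc:
            break
        found += defects[i]
        spent += sizes[i]
    return found

budget = int(sizes.sum() * 0.2)               # inspect 20% of the code base
smallest_first = np.argsort(sizes)
largest_first = smallest_first[::-1]

small = defects_found(smallest_first, budget)
large = defects_found(largest_first, budget)
print(f"smallest-first finds {small / large:.1f}x the defects of largest-first")
```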

A. Güneş Koru received a B.S. degree in Computer Engineering from Ege University, İzmir, Turkey in 1996, an M.S. degree in Computer Engineering from Dokuz Eylül University, İzmir, Turkey in 1998, an M.S. degree in Software Engineering from Southern Methodist University (SMU), Dallas, TX in 2002, and a Ph.D. degree in Computer Science from SMU in 2004. He is an assistant professor in the Department of Information Systems at the University of Maryland, Baltimore County (UMBC). His research interests include software quality, measurement, maintenance, and evolution, open source software, bioinformatics, and healthcare informatics.

Khaled El Emam is an Associate Professor at the University of Ottawa, Faculty of Medicine and the School of Information Technology and Engineering. He is a Canada Research Chair in Electronic Health Information at the University of Ottawa. Previously, Khaled was a Senior Research Officer at the National Research Council of Canada, and before that he was head of the Quantitative Methods Group at the Fraunhofer Institute in Kaiserslautern, Germany. In 2003 and 2004, he was ranked as the top systems and software engineering scholar worldwide by the Journal of Systems and Software, based on his research on measurement and quality evaluation and improvement, and he was ranked second in 2002 and 2005. He holds a Ph.D. from the Department of Electrical and Electronics, King’s College, University of London (UK).

Dongsong Zhang is an Associate Professor in the Department of Information Systems at the University of Maryland, Baltimore County. He received his Ph.D. in Management Information Systems from the University of Arizona. His current research interests include context-aware mobile computing, computer-mediated collaboration and communication, knowledge management, and open source software. Dr. Zhang’s work has been published or will appear in journals such as Communications of the ACM (CACM), Journal of Management Information Systems (JMIS), IEEE Transactions on Knowledge and Data Engineering (TKDE), IEEE Transactions on Multimedia, IEEE Transactions on Systems, Man, and Cybernetics, and IEEE Transactions on Professional Communication, among others. He has received research grants and awards from NIH, Google Inc., and the Chinese Academy of Sciences. He also serves as senior editor or editorial board member of a number of journals.

Hongfang Liu is currently an Assistant Professor in the Department of Biostatistics, Bioinformatics, and Biomathematics (DBBB) of Georgetown University. She has been working in the field of Biomedical Informatics for more than 10 years. Her expertise in clinical informatics includes clinical information systems, controlled medical vocabularies, and medical language processing. Her expertise in bioinformatics includes microarray data analysis, biomedical entity nomenclature, molecular biology database curation, ontology, and biological text mining. She received a B.S. degree in Applied Mathematics and Statistics from the University of Science and Technology of China in 1994, an M.S. degree in Computer Science from Fordham University in 1998, and a Ph.D. degree in Computer Science from the Graduate School of the City University of New York in 2002.

Divya Mathew received the B.Tech. degree in Computer Science and Engineering from Cochin University of Science and Technology in 2005 and the M.S. degree in Information Systems from the University of Maryland, Baltimore County in 2008. Her research interests include software engineering and privacy-preserving data mining techniques.
203.
204.
Products of petroleum crude are multifluorophoric in nature due to the presence of a mixture of a variety of polycyclic aromatic hydrocarbons (PAHs). The use of excitation-emission matrix fluorescence (EEMF) spectroscopy for the analysis of such multifluorophoric samples is gaining acceptance. In this work, EEMF spectroscopic data are processed using chemometric multivariate methods to develop a reliable calibration model for the quantitative determination of the kerosene fraction present in petrol. The application of the N-way partial least squares regression (N-PLS) method was found to be very efficient for the estimation of the kerosene fraction. A very good degree of prediction accuracy, expressed in terms of the root mean square error of prediction (RMSEP), was achieved at a kerosene fraction of 2.05%.
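N-PLS exploits the three-way (sample × excitation × emission) structure of EEM data. Scikit-learn does not ship an N-PLS implementation, so the sketch below substitutes ordinary two-way PLS on unfolded EEMs purely to illustrate the calibration-and-RMSEP workflow; the synthetic spectra, component count and train/test split are assumptions, not the paper’s data or method.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for EEM landscapes: each sample is an excitation x
# emission intensity matrix whose magnitude scales with a made-up kerosene
# fraction plus noise. Real EEMF measurements would replace this block.
n_samples, n_ex, n_em = 60, 20, 30
kerosene_frac = rng.uniform(0.0, 0.2, n_samples)            # target (0-20 %)
base_profile = rng.random((n_ex, n_em))
eem = kerosene_frac[:, None, None] * base_profile + 0.01 * rng.random((n_samples, n_ex, n_em))

# N-PLS would keep the three-way structure; here each EEM is unfolded into a
# vector and ordinary PLS regression is applied as a simpler illustration.
X = eem.reshape(n_samples, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X, kerosene_frac, test_size=0.25, random_state=0)

model = PLSRegression(n_components=3).fit(X_tr, y_tr)
pred = model.predict(X_te).ravel()
rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"RMSEP: {rmsep:.4f} (fraction units)")
```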
205.
This paper describes an efficient method for the synthesis of long-chain dialkyldiamido imidazolines by the reaction of diethylenetriamine and several fatty acids under solvent-free microwave irradiation using calcium oxide as a support. The synthesis required much less time than conventional thermal condensation, was carried out in an open vessel, and gave products in good yield and of high purity. The fatty imidazolines were then quaternized using dimethyl sulfate as the quaternizing agent and isopropanol as the solvent to produce cationic imidazolinium salts, which were evaluated for yield and cationic content. Instrumental techniques, viz. FT-IR and 1H NMR, verified the formation of the imidazolines and their subsequent quaternization. This method produced imidazolines in only 5–10 min with a yield of 89–91%, compared with 8–10 h and a lower yield of 75–80% for the conventional thermal condensation method.
206.
This paper describes the synthesis of long‐chain dialkylamido imidazolines based on tallow fatty acids and diethylenetriamine, followed by their quaternization. The imidazolines were obtained by solvent-free microwave synthesis using calcium oxide as a support and were then quaternized using dimethyl sulfate as the quaternizing agent and iso‐propanol as the solvent to produce cationic imidazolinium salts. The synthesized cationic imidazoline surfactants were evaluated for yield and cationic content. Instrumental techniques, viz. FT‐IR and 1H‐NMR, verified the formation of the imidazolines and their subsequent quaternization. The surface‐active and performance properties of the cationic imidazolines, in terms of critical micelle concentration, surface tension, dispersibility, emulsion stability, softening, rewettability and antistatic properties, are also reported.
207.
Journal of Applied Electrochemistry - A significant advancement in photoelectrochemical water-splitting current is observed using a uniquely evolved n/n-junction bilayered nano-hetero-structured thin...
208.
This paper presents an investigation of crosstalk effects in ternary logic-based coupled interconnects. The crosstalk analysis is carried out for coupled copper (Cu) interconnects and copper-multilayer graphene (Cu-MLG) interconnects. In the Cu-MLG interconnects, the Cu interconnect is enclosed with an MLG barrier, and a standard ternary inverter is used to drive the interconnect. Based on industry-standard HSPICE simulation results, crosstalk effects such as the noise peak and delay are lower than for conventional Cu interconnects. Moreover, the Cu-MLG interconnects show reduced power dissipation, power-delay product (PDP), and energy-delay product (EDP) compared with the Cu interconnects. From the simulation results, it is observed that the Cu-MLG interconnects provide performance improvements of up to 30.67% over the Cu interconnects. Thus, Cu-MLG interconnects are better suited to ternary logic integrated circuits than traditional Cu interconnects.
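The comparison above is expressed through standard figures of merit: PDP is power multiplied by delay, and EDP is PDP multiplied by delay again. The short sketch below simply evaluates those definitions; the delay and power numbers are purely hypothetical placeholders, not values from the paper’s HSPICE runs.

```python
# Figures of merit for the interconnect comparison: power-delay product (PDP)
# and energy-delay product (EDP). All numbers below are hypothetical.
cases = {
    "Cu":     {"delay_ps": 120.0, "power_uW": 35.0},
    "Cu-MLG": {"delay_ps":  95.0, "power_uW": 30.0},
}

def figures_of_merit(delay_ps, power_uW):
    delay_s = delay_ps * 1e-12
    power_w = power_uW * 1e-6
    pdp = power_w * delay_s          # energy per transition (J)
    edp = pdp * delay_s              # energy-delay product (J*s)
    return pdp, edp

results = {name: figures_of_merit(**v) for name, v in cases.items()}
for name, (pdp, edp) in results.items():
    print(f"{name:7s} PDP = {pdp:.3e} J, EDP = {edp:.3e} J*s")

improvement = 100 * (1 - results["Cu-MLG"][1] / results["Cu"][1])
print(f"EDP improvement of Cu-MLG over Cu: {improvement:.1f}%")
```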
209.
Austenite reversion and the thermal stability attained during the transformation are key to enhanced toughness and blast resistance in transformation-induced-plasticity martensitic steels. We demonstrate that the thermal stability of Ni-stabilized austenite and the kinetics of the transformation can be controlled by forming Ni-rich regions in proximity to pre-existing (retained) austenite. Atom probe tomography (APT), in conjunction with thermodynamic and kinetic modeling, elucidates the role of Ni-rich regions in enhancing the growth kinetics of thermally stable austenite formed using a multistep intercritical (Quench-Lamellarization-Tempering (QLT)-type) heat treatment for a low-carbon 10 wt pct Ni steel. Direct evidence of austenite formation is provided by dilatometry, and the volume fraction is quantified by synchrotron X-ray diffraction. The results indicate the growth of nm-thick austenite layers during the second intercritical tempering treatment (T-step) at 863 K (590 °C), with austenite retained from the first intercritical treatment (L-step) at 923 K (650 °C) acting as a nucleation template. For the first time, the thermal stability of austenite is quantified with respect to its compositional evolution during the multistep intercritical treatment of these steels. Austenite compositions measured by APT are used in combination with the thermodynamic and kinetic approach formulated by Ghosh and Olson to assess thermal stability and predict the martensite-start temperature. This approach is particularly useful because empirical relations cannot be extrapolated to the highly Ni-enriched austenite investigated in the present study.
210.
This work addresses the rolling element bearing (REB) fault classification problem by tackling the issue of identifying appropriate parameters for the extreme learning machine (ELM) and enhancing its effectiveness. The study introduces a memetic algorithm (MA) to identify the optimal ELM parameter set, yielding a compact ELM architecture along with better ELM performance. The goal of using the MA is to explore promising regions of the solution space and systematically exploit good solutions within the feasible region. In the proposed method, a local search is combined with link-based and node-based genetic operators to obtain a compact ELM structure. A vibration data set simulated from the bearing of a rotating machine is used to assess the performance of the optimized ELM on the REB fault categorization problem. The complexity involved in choosing a promising feature set is eliminated because the vibration data are transformed into kurtograms that serve as the model input. The experimental results demonstrate that the MA efficiently optimizes the ELM, improving the fault classification accuracy to around 99.0% and reducing the number of hidden nodes required by 17.0% for both data sets. As a result, the proposed scheme is shown to be a practical and well-organized solution that offers a compact ELM architecture in comparison with state-of-the-art methods for the fault classification problem.
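For readers unfamiliar with the base learner being tuned, the sketch below is a minimal plain ELM in Python: random, fixed input weights and a least-squares (pseudo-inverse) solution for the output weights. It is not the memetic-algorithm-optimized ELM of the study, and the toy features standing in for kurtogram inputs, the hidden-node count and the activation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy two-class data standing in for kurtogram-derived features; the feature
# dimension, class structure and hidden-node count are assumptions.
n, d, n_hidden = 200, 8, 40
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)       # labels in {0, 1}
T = np.column_stack([1 - y, y])                       # one-hot targets

# Basic ELM: random input weights/biases stay fixed; only the output weights
# are learned by a least-squares fit via the Moore-Penrose pseudo-inverse.
W = rng.normal(size=(d, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                                # hidden-layer outputs
beta = np.linalg.pinv(H) @ T                          # output weights

pred = np.argmax(H @ beta, axis=1)
print(f"training accuracy: {(pred == y).mean():.3f}")
```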