Query results: 207 articles found (search time: 31 ms)
1.
The case-based learning (CBL) approach has gained attention in medical education as an alternative to traditional learning methodologies. However, current CBL systems do not provide computer-based domain knowledge to medical students for solving real-world clinical cases during CBL practice. To automate CBL, clinical documents are a useful source for constructing domain knowledge. Most systems and methodologies in the literature require a knowledge engineer to construct machine-readable knowledge. With these facts in view, we present a knowledge construction methodology (KCM-CD) to construct a domain knowledge ontology (i.e., structured declarative knowledge) from unstructured text in a systematic way using artificial intelligence techniques, with minimal intervention from a knowledge engineer. To combine the strengths of humans and computers, and to realize the KCM-CD methodology, an interactive case-based learning system (iCBLS) was developed. Finally, the developed ontological model was evaluated to assess the quality of the domain knowledge in terms of a coherence measure. The results showed that the overall domain model has positive coherence values, indicating that the words in each branch of the domain ontology are correlated with one another and that the quality of the developed model is acceptable.
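The coherence evaluation described above can be approximated in a few lines. The sketch below is only illustrative of the general idea, not the paper's exact measure: it scores one ontology branch by the average pairwise cosine similarity of its word vectors, so a positive score suggests the branch terms are mutually related. The toy 2-D "embeddings" and word list are invented for the example.

```python
from itertools import combinations

def branch_coherence(branch_words, vectors):
    """Average pairwise cosine similarity of the words in one ontology
    branch; a rough proxy for the coherence measure discussed above."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = sum(a * a for a in u) ** 0.5
        nv = sum(b * b for b in v) ** 0.5
        return dot / (nu * nv)
    pairs = list(combinations(branch_words, 2))
    return sum(cos(vectors[a], vectors[b]) for a, b in pairs) / len(pairs)

# Toy 2-D "embeddings" for illustration only; a real system would use
# trained word vectors.
vecs = {"fever": [0.9, 0.1], "temperature": [0.8, 0.2], "cough": [0.7, 0.3]}
score = branch_coherence(["fever", "temperature", "cough"], vecs)
```

A branch whose words point in similar directions in the embedding space scores close to 1, while unrelated terms pull the average toward zero or below.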
2.
3.
Vicious code, especially viruses, is a class of malware that has caused many disasters and continues to exploit new vulnerabilities. Such code is injected into benign programs in order to abuse its hosts and ease its propagation. The offsets of injected virus code are unknown, and its effects usually remain latent until the code is executed and activated, which in turn makes viruses very hard to detect. In this paper, the enriched control flow graph miner (ECFGM) is presented to detect files infected by unknown viruses. ECFGM uses an enriched control flow graph model to represent both benign and vicious code. This model carries more information than a traditional control flow graph (CFG) by incorporating statistical information about dependent assembly instructions and API calls. To the best of our knowledge, the approach presented in this paper is the first that can recognize the offset of unknown virus code in victim files. The contributions of this paper are twofold: first, the presented model can detect unknown vicious code using the ECFG model with reasonable complexity and good accuracy; second, our approach is resistant to metamorphic viruses that employ dead-code insertion, variable renaming, and instruction reordering.
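To make the "enriched CFG" idea concrete, here is a minimal sketch of building a flow graph from a disassembly listing and attaching instruction statistics to it. It is a toy, not the paper's algorithm: nodes are single instruction addresses rather than basic blocks, the mnemonic set is made up for the example, and the only "enrichment" kept is a mnemonic histogram (the real ECFG also tracks API calls and instruction dependencies).

```python
from collections import Counter, defaultdict

def build_enriched_cfg(instructions):
    """Build a toy control flow graph from (addr, mnemonic, target)
    tuples, where `target` is a jump destination or None.  Edges follow
    fall-through and branch flow; the Counter of mnemonics stands in
    for the statistical enrichment the paper describes."""
    edges = defaultdict(set)
    stats = Counter()
    for i, (addr, op, target) in enumerate(instructions):
        stats[op] += 1
        if target is not None:                      # branch edge
            edges[addr].add(target)
        if op != "jmp" and i + 1 < len(instructions):  # fall-through edge
            edges[addr].add(instructions[i + 1][0])
    return edges, stats

# Hypothetical disassembly fragment.
code = [
    (0, "mov", None),
    (1, "cmp", None),
    (2, "je", 5),     # conditional branch to 5, falls through to 3
    (3, "add", None),
    (4, "jmp", 6),    # unconditional jump skips 5
    (5, "sub", None),
    (6, "ret", None),
]
cfg, stats = build_enriched_cfg(code)
```

Comparing such graphs (edges plus the per-node statistics) between a known-benign program and a suspect file is the intuition behind locating the offset where injected code begins.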
4.
Malware is one of the serious problems facing modern society. Although signature-based detection of malicious code is the standard technique in all commercial antivirus software, it can only achieve detection once a virus has already caused damage and been registered; it therefore fails to detect new (unknown) malware. Since most malware exhibits similar behavior, a behavior-based method can detect unknown malware. The behavior of a program can be represented by the set of APIs (application programming interfaces) it calls, so a classifier can be trained on programs' API call sets, yielding an intelligent malware detection system that recognizes unknown malware automatically. On the other hand, the control flow graph (CFG) is an appealing representation model for visualizing the structure of executable files, and it captures another semantic aspect of programs. This paper presents a robust semantics-based method to detect unknown malware by combining the CFG representation with the called APIs. The main contribution of the paper is extracting CFGs from programs and combining them with extracted API calls to capture more information about executable files; this new representation model is called API-CFG. In addition, to speed up learning and classification, the control flow graphs are converted into feature vectors by a simple transformation. Our approach can classify unseen benign and malicious code with high accuracy, and the results show a statistically significant improvement over n-gram-based detection methods.
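One simple way to turn a combined graph-plus-API representation into a fixed-length vector, as the abstract's "feature vector" step requires, is to flatten the CFG's adjacency matrix and append binary API-call indicators. The encoding below is an assumption for illustration; the paper's exact transformation may differ, and the API names are hypothetical.

```python
def graph_to_feature_vector(edges, n_nodes, api_calls, api_vocab):
    """Flatten a CFG adjacency matrix (row-major) and append one 0/1
    indicator per API in a fixed vocabulary, producing a vector a
    standard classifier can consume.  Sketch only."""
    vec = [0] * (n_nodes * n_nodes)
    for src, dsts in edges.items():
        for dst in dsts:
            vec[src * n_nodes + dst] = 1
    vec += [1 if api in api_calls else 0 for api in api_vocab]
    return vec

edges = {0: {1}, 1: {2}, 2: {0}}   # 3-node cycle CFG
vocab = ["CreateFile", "WriteProcessMemory", "RegSetValue"]
fv = graph_to_feature_vector(edges, 3, {"CreateFile"}, vocab)
```

The resulting vector has one slot per possible edge plus one per vocabulary API, so samples of the same dimensionality can be fed directly to any off-the-shelf classifier.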
5.
The Naive Bayes classifier is a popular classification technique in data mining and machine learning and has been shown to be very effective on a variety of classification problems. However, its strong assumption that all attributes are conditionally independent given the class is often violated in real-world applications, and this violation can increase the expected error. Numerous methods have been proposed to improve the Naive Bayes classifier by relaxing the independence assumption; another alternative is to assign weights to the attributes. In this paper, we propose a novel attribute-weighted Naive Bayes classifier that applies weights to the conditional probabilities. An objective function based on the structure of the Naive Bayes classifier and the attribute weights is formulated, and the optimal weights are determined by local optimization using the quasisecant method. In the proposed approach, the standard Naive Bayes classifier serves as the starting point. We report numerical experiments on several real-world binary classification data sets, which show the efficiency of the proposed method.
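The weighting scheme the abstract describes can be sketched directly: each attribute's conditional log-likelihood is multiplied by a weight, so with all weights equal to 1 the model reduces to standard Naive Bayes. The sketch below uses fixed toy weights; the paper's quasisecant weight optimization is omitted, and the data set is invented.

```python
import math
from collections import Counter, defaultdict

class WeightedNaiveBayes:
    """Discrete Naive Bayes in which each attribute's conditional
    log-likelihood is scaled by a weight w_j.  With all w_j = 1 this
    is standard Naive Bayes; the paper tunes the weights with a
    quasisecant local optimizer, which is not reproduced here."""

    def fit(self, X, y, weights):
        self.w = weights
        self.classes = sorted(set(y))
        self.counts = Counter(y)
        self.n = len(y)
        self.tally = defaultdict(Counter)   # (class, attr) -> value counts
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                self.tally[(yi, j)][v] += 1
        return self

    def predict(self, xi):
        def score(c):
            s = math.log(self.counts[c] / self.n)          # class prior
            for j, v in enumerate(xi):
                # Laplace-smoothed conditional probability, weighted.
                p = (self.tally[(c, j)][v] + 1) / (self.counts[c] + 2)
                s += self.w[j] * math.log(p)
            return s
        return max(self.classes, key=score)

# Two binary attributes: the first is informative, the second is noise,
# so it receives a smaller weight (toy values, not optimized).
X = [(1, 0), (1, 1), (0, 0), (0, 1)]
y = ["a", "a", "b", "b"]
clf = WeightedNaiveBayes().fit(X, y, weights=[1.0, 0.2])
pred = clf.predict((1, 0))
```

Down-weighting the noisy attribute shrinks its influence on the posterior, which is exactly the effect the optimized weights are meant to achieve.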
6.
In this paper, a novel algorithm for image encryption based on a hash function is proposed. In our algorithm, a 512-bit external secret key is used as the input to the salsa20 hash function. First, the hash function is modified to generate a key stream better suited to image encryption. The final encryption key stream is then produced by correlating the key stream with the plaintext, yielding both key sensitivity and plaintext sensitivity. The scheme achieves high sensitivity, high complexity, and high security with only two rounds of diffusion. In the first round, the original image is partitioned horizontally into an array of 1,024 sections of size 8 × 8; in the second round, the same operation is applied vertically to the transpose of the resulting array. The main idea of the algorithm is to use the average of the image data for encryption: to encrypt each section, the average of the other sections is employed. The algorithm therefore uses different averages when encrypting different input images (even with the same hash-based key sequence), which significantly increases the resistance of the cryptosystem against known/chosen-plaintext and differential attacks. It is demonstrated that the 2D correlation coefficient (CC), peak signal-to-noise ratio (PSNR), encryption quality (EQ), entropy, mean absolute error (MAE), and decryption quality satisfy security and performance requirements (CC < 0.002177, PSNR < 8.4642, EQ > 204.8, entropy > 7.9974, and MAE > 79.35). Number of pixel change rate (NPCR) analysis reveals that when only one pixel of the plain image is modified, almost all cipher pixels change (NPCR > 99.6125%), and the unified average changing intensity is high (UACI > 33.458%). Moreover, the proposed algorithm is very sensitive to small changes (e.g., modification of a single bit) in the external secret key (NPCR > 99.65%, UACI > 33.55%). The algorithm yields better security performance than the results obtained with other algorithms.
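The core plaintext-sensitivity trick, masking each section with the average of all the other sections before applying the key stream, can be sketched in a few lines. This is a simplified illustration only: it uses 4 flattened sections instead of 1,024, an integer average, and a random stand-in key stream rather than the modified salsa20 output.

```python
import random

def diffuse(sections, keystream):
    """One diffusion round in the spirit of the scheme above: each
    section is XOR-masked with the key stream AND with the mean of all
    *other* sections, so every plaintext section influences every
    ciphertext section.  Simplified sketch, not the full cipher."""
    n = len(sections)
    size = len(sections[0])
    totals = [sum(sec[k] for sec in sections) for k in range(size)]
    out = []
    for i, sec in enumerate(sections):
        enc = []
        for k, p in enumerate(sec):
            other_avg = (totals[k] - p) // (n - 1)   # mean of other sections
            enc.append(p ^ (other_avg & 0xFF) ^ keystream[i][k])
        out.append(enc)
    return out

random.seed(0)
# 4 flattened 8x8 sections of byte values; keystream is a stand-in.
plain = [[random.randrange(256) for _ in range(64)] for _ in range(4)]
ks = [[random.randrange(256) for _ in range(64)] for _ in range(4)]
cipher = diffuse(plain, ks)
```

Because `other_avg` depends on the plaintext itself, changing one pixel perturbs the mask applied to every other section, which is what drives the high NPCR/UACI figures reported above.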
7.
Gelatin (Gel)-based pH- and thermo-responsive magnetic hydrogels (MH-1 and MH-2) were designed and developed as novel drug delivery systems (DDSs) for cancer chemo/hyperthermia therapy. To this end, Gel was functionalized with methacrylic anhydride (GelMA) and then copolymerized with (2-dimethylaminoethyl) methacrylate (DMAEMA) monomer in the presence of methacrylate-end-capped magnetic nanoparticles (MNPs) and triethylene glycol dimethacrylate (TEGDMA) as crosslinker. Afterward, a thiol-end-capped poly(N-isopropylacrylamide) (PNIPAAm-SH) was synthesized via atom transfer radical polymerization and attached to the hydrogel through "thiol-ene" click grafting. The preliminary performance of the developed MHs for chemo/hyperthermia therapy of human breast cancer was investigated by loading doxorubicin hydrochloride (Dox) as an anticancer agent, followed by cytotoxicity measurement of the drug-loaded DDSs using the MTT assay under both chemotherapy and combined chemo/hyperthermia therapy. Owing to the porous morphologies of the fabricated magnetic hydrogels, as observed in scanning electron microscopy images, and to strong physicochemical interactions (e.g., hydrogen bonding), the drug loading capacities of MH-1 and MH-2 were 72 ± 1.4 and 77 ± 1.8, respectively. The DDSs exhibited acceptable pH- and thermally triggered drug release behavior. The MTT assay results revealed that combining hyperthermia therapy with chemotherapy has a synergistic effect on the anticancer activity of the developed DDSs.
8.
In recent years, classification learning for data streams has become an important and active research topic. A major challenge posed by data streams is that their underlying concepts can change over time, requiring classifiers to be revised accordingly and in a timely manner. A common methodology for detecting concept change is to observe the online classification accuracy: if accuracy drops below some threshold, a concept change is deemed to have taken place. An implicit assumption behind this methodology is that any drop in classification accuracy can be interpreted as a symptom of concept change. Unfortunately, this assumption is often violated in the real world, where data streams carry noise that can also significantly reduce classification accuracy. Compounding the problem, traditional noise-cleansing methods are ill-suited to data streams: they normally need to scan the data multiple times, whereas stream learning can afford only a one-pass scan because of the data's high speed and huge volume. Another open problem in data stream classification is how to deal with missing values: when new instances containing missing values arrive, how a learning model should classify them and update itself accordingly remains largely unexplored. To address these problems, this paper proposes a novel classification algorithm, the flexible decision tree (FlexDT), which extends fuzzy logic to data stream classification. The advantages are threefold. First, FlexDT offers a flexible structure to handle concept change effectively and efficiently. Second, FlexDT is robust to noise, so noise is prevented from interfering with classification accuracy, and an accuracy drop can safely be attributed to concept change. Third, it deals with missing values in an elegant way. Extensive evaluations compare FlexDT with representative existing data stream classification algorithms using a large suite of data streams and various statistical tests. The experimental results suggest that FlexDT offers a significant benefit for real-world data stream classification, where concept change, noise, and missing values coexist.
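The accuracy-threshold drift heuristic criticized above is easy to state in code. The sketch below monitors online accuracy over a sliding window and raises an alarm when it dips below a threshold; as the abstract argues, noise alone can trigger the same signal, which is the weakness FlexDT aims to avoid. The one-flag-per-prediction stream interface and the window/threshold values are assumptions for the example.

```python
from collections import deque

def accuracy_drift_monitor(stream, window=50, threshold=0.7):
    """Flag a concept change whenever sliding-window online accuracy
    falls below `threshold`.  `stream` yields booleans: was each
    prediction correct?  Returns the time indices of the alarms."""
    recent = deque(maxlen=window)
    alarms = []
    for t, correct in enumerate(stream):
        recent.append(correct)
        if len(recent) == window and sum(recent) / window < threshold:
            alarms.append(t)
            recent.clear()          # reset after signaling a change
    return alarms

# 100 accurate predictions, then a simulated concept change during
# which the (unadapted) model starts misclassifying.
stream = [True] * 100 + [False] * 40 + [True] * 60
alarms = accuracy_drift_monitor(stream, window=50, threshold=0.7)
```

Note that a burst of label noise would produce exactly the same alarm as a genuine concept change, which is why the paper decouples noise robustness from drift detection.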
9.
An analytical study is made of the free and forced convection boundary layer flow past a porous medium bounded by a semi-infinite vertical porous plate. Locally similar solutions are obtained by a perturbation method for large suction. Solutions for the velocity and temperature distributions are shown graphically for various suction velocities and values of the driving parameter Gr/Re, where Gr is the Grashof number and Re is the Reynolds number. The corresponding values of the skin friction coefficient and the Nusselt number are given in tabular form.
10.
Butter, butterfat, and corn, coconut, rapeseed, and soybean oils were exposed to 500 ft-c of fluorescent light under varying time-temperature conditions. Oxidation rates were measured by peroxide values, and the vitamin A and β-carotene content of butterfat was estimated. The effect of wavelength on the relative rates of oxidation was determined, and the light-transmitting properties of the samples at 15 and 30 °C were measured over the spectral range 380–750 nm. No increase in oxidation rate was observed once the light was switched off. The stability of the oils, as indicated by the oxidation rates, did not correlate well with the ratios of C18:2 to C18:1 or C18:3 to C18:2, nor with the degree of unsaturation. An increase in temperature alone had minimal effect; in the presence of light, however, the rate of oxidation increased considerably, with a corresponding decrease in the vitamin A and β-carotene content. β-Carotene provided strong protection: after its photobleaching in butterfat, peroxide values rose rapidly. With coconut oil, the oxidation rate was greater at 15 °C than at 30 °C because of greater light absorption at 15 °C over the entire spectrum. The rate of oxidation decreased at higher wavelengths, and this effect was more pronounced in the vegetable oils than in butterfat, where β-carotene is considered to act as a filter for light of short wavelength. Presented at the AOCS meeting, Dallas, April 1975.
Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)