Article Search
A total of 375 results found (search time: 31 ms).
1.
Ahmad Bilal, Jian Wang, Enam Rabia Noor, Abbas Ali. Wireless Personal Communications, 2021, 118(2): 1055-1073

According to the recent literature, Orthogonal Frequency Division Multiplexing (OFDM), a multiple-access technique, is considered well suited to 3G, 4G and 5G high-speed wireless communication. OFDM owes its popularity to its high bandwidth efficiency and superior data rate. However, a high peak-to-average power ratio (PAPR) and inter-carrier interference (ICI) remain challenges that must be tackled with appropriate mitigation schemes. As the contribution of the present work, an improved self-cancellation (SC) technique is designed and simulated in Simulink to mitigate the effect of ICI. The proposed technique (Improved SC) is built on discrete wavelet transform (DWT) based OFDM and compared with the conventional SC scheme under different channel conditions, i.e., AWGN and Rayleigh fading environments. The proposed DWT-OFDM with the Improved SC scheme is found to outperform the conventional SC technique significantly under both AWGN and Rayleigh channels. Furthermore, to substantiate the novelty of the contribution, a Split-DWT-based Simulink model of the Improved SC scheme is investigated to analyse BER performance. This Split-DWT model points to future research potential in wavelet hybridization of OFDM to suppress ICI effects more efficiently.

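A minimal NumPy sketch (not taken from the paper) of the conventional ICI self-cancellation idea that this abstract improves upon: each data symbol is mapped onto a pair of adjacent subcarriers with opposite polarity, and the receiver demodulates by differencing the pair so that ICI leakage from neighbouring subcarriers largely cancels. The Improved SC and DWT-OFDM parts of the paper are not reproduced here; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                                           # subcarriers
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N // 2)    # QPSK symbols

# Conventional SC mapping: X[2k] = d[k], X[2k+1] = -d[k]
X = np.zeros(N, dtype=complex)
X[0::2] = data
X[1::2] = -data

tx = np.fft.ifft(X) * np.sqrt(N)          # OFDM modulation (IFFT)

# Toy impairment: carrier frequency offset (a source of ICI) plus noise
eps = 0.05                                # normalized frequency offset (assumed)
n = np.arange(N)
rx = tx * np.exp(2j * np.pi * eps * n / N) \
     + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

Y = np.fft.fft(rx) / np.sqrt(N)           # OFDM demodulation (FFT)

# SC combining: difference of each subcarrier pair recovers the symbol
d_hat = (Y[0::2] - Y[1::2]) / 2
symbol_errors = np.sum((np.sign(d_hat.real) + 1j * np.sign(d_hat.imag)) != data)
print("symbol errors:", symbol_errors)
```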
2.
The case-based learning (CBL) approach has gained attention in medical education as an alternative to traditional learning methodologies. However, current CBL systems do not provide computer-based domain knowledge to medical students for solving real-world clinical cases during CBL practice. To automate CBL, clinical documents are a useful source for constructing domain knowledge, yet most systems and methodologies in the literature require a knowledge engineer to build machine-readable knowledge. With these facts in view, we present a knowledge construction methodology (KCM-CD) that constructs a domain knowledge ontology (i.e., structured declarative knowledge) from unstructured text in a systematic way using artificial intelligence techniques, with minimal intervention from a knowledge engineer. To combine the strengths of humans and computers and to realize the KCM-CD methodology, an interactive case-based learning system (iCBLS) was developed. Finally, the resulting ontological model was assessed for the quality of its domain knowledge in terms of a coherence measure. The results showed that the overall domain model has positive coherence values, indicating that the words in each branch of the domain ontology are correlated with one another and that the quality of the developed model is acceptable.
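A small illustrative sketch (hypothetical, not the KCM-CD pipeline) of the kind of coherence check the abstract describes: each ontology branch is treated as a set of terms with vector representations, and the branch is scored by the average pairwise cosine similarity, so that positive values indicate the terms in a branch are correlated. The branch name, terms and embeddings below are invented for illustration.

```python
import numpy as np

def branch_coherence(term_vectors):
    """Average pairwise cosine similarity of the term vectors in one ontology branch."""
    V = np.asarray(term_vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    sims = V @ V.T
    iu = np.triu_indices(len(V), k=1)      # upper triangle: each unordered pair once
    return sims[iu].mean()

# Hypothetical embeddings for terms in a "diabetes" branch of a domain ontology
branch = {
    "insulin": [0.9, 0.1, 0.2],
    "glucose": [0.8, 0.2, 0.1],
    "HbA1c":   [0.7, 0.3, 0.2],
}
print("branch coherence:", round(branch_coherence(list(branch.values())), 3))
```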
3.
4.
Malicious codes, especially viruses, are a prominent class of malware that has caused many disasters and continues to exploit new vulnerabilities. Such code is injected into benign programs in order to abuse its hosts and ease its propagation. The offsets of injected virus code are unknown, and the code usually remains latent until it is executed and activated, which in turn makes viruses very hard to detect. In this paper, an enriched control flow graph miner (ECFGM) is presented to detect files infected by unknown viruses. ECFGM uses an enriched control flow graph model to represent benign and malicious code. This model carries more information than a traditional control flow graph (CFG) by incorporating statistical information about dependent assembly instructions and API calls. To the best of our knowledge, the approach presented in this paper can, for the first time, recognize the offset of the infected code of unknown viruses in victim files. The main contributions of this paper are twofold: first, the presented model is able to detect unknown malicious code using the ECFG model with reasonable complexity and desirable accuracy; second, our approach is resistant to metamorphic viruses that use dead-code insertion, variable renaming and instruction reordering.
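A hedged sketch (illustrative only, not the ECFGM implementation) of what an "enriched" control flow graph might look like in code: a directed graph whose basic-block nodes carry statistical annotations such as opcode and API-call frequencies, which can then be compared between a known-benign program and a suspect file. The library choice, attribute names and toy blocks are assumptions.

```python
import networkx as nx
from collections import Counter

def build_ecfg(blocks, edges):
    """blocks: {block_id: (list_of_opcodes, list_of_api_calls)}; edges: (src, dst) pairs."""
    g = nx.DiGraph()
    for bid, (opcodes, apis) in blocks.items():
        # Enrichment: per-block statistics instead of a bare CFG node
        g.add_node(bid, opcode_freq=Counter(opcodes), api_freq=Counter(apis))
    g.add_edges_from(edges)
    return g

def block_similarity(a, b):
    """Cosine-style overlap between two enriched nodes' opcode histograms."""
    ka, kb = a["opcode_freq"], b["opcode_freq"]
    num = sum(ka[k] * kb[k] for k in set(ka) & set(kb))
    den = (sum(v * v for v in ka.values()) ** 0.5) * (sum(v * v for v in kb.values()) ** 0.5)
    return num / den if den else 0.0

# Toy example: a block that suddenly uses process-injection APIs stands out
benign = build_ecfg({0: (["push", "mov", "call"], ["CreateFileW"])}, [])
suspect = build_ecfg({0: (["push", "mov", "call", "jmp"], ["WriteProcessMemory"])}, [])
print("block similarity:", round(block_similarity(benign.nodes[0], suspect.nodes[0]), 3))
```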
5.
Malware is nowadays one of the serious problems facing modern societies. Although signature-based malicious-code detection is the standard technique in all commercial antivirus software, it can only detect a virus once it has already caused damage and been registered; it therefore fails to detect new (unknown) malware. Since most malware exhibits similar behavior, a behavior-based method can detect unknown malware. The behavior of a program can be represented by the set of API (application programming interface) calls it makes, so a classifier can be employed to build a learning model from programs' API calls, and an intelligent malware detection system can then be developed to detect unknown malware automatically. On the other hand, the control flow graph (CFG) is an appealing representation model for visualizing the structure of executable files, and it captures another semantic aspect of programs. This paper presents a robust semantics-based method to detect unknown malware based on a combination of the CFG representation and the called APIs. The main contribution of this paper is extracting CFGs from programs and combining them with the extracted API calls to obtain richer information about executable files; this new representation model is called API-CFG. In addition, to keep the learning and classification process fast, the control flow graphs are converted into a set of feature vectors. Our approach is capable of classifying unseen benign and malicious code with high accuracy, and the results show a statistically significant improvement over n-gram-based detection methods.
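A minimal scikit-learn sketch (an assumption-laden illustration, not the paper's API-CFG pipeline) of the general idea: each executable is represented by a feature vector that concatenates simple CFG statistics with API-call counts, and an off-the-shelf classifier is trained to separate benign from malicious samples. The feature layout, API vocabulary and sample data are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

API_VOCAB = ["CreateFileW", "RegSetValueExW", "WriteProcessMemory", "InternetOpenA"]

def to_feature_vector(cfg_stats, api_calls):
    """cfg_stats: (num_nodes, num_edges, num_loops); api_calls: list of API names."""
    api_counts = [api_calls.count(a) for a in API_VOCAB]
    return np.array(list(cfg_stats) + api_counts, dtype=float)

# Hypothetical training samples: (CFG statistics, API call list, label 0 = benign / 1 = malware)
samples = [
    ((120, 150, 4), ["CreateFileW", "InternetOpenA"], 0),
    ((80, 95, 2), ["CreateFileW"], 0),
    ((200, 260, 9), ["WriteProcessMemory", "RegSetValueExW", "WriteProcessMemory"], 1),
    ((150, 210, 7), ["WriteProcessMemory", "InternetOpenA"], 1),
]
X = np.stack([to_feature_vector(cfg, apis) for cfg, apis, _ in samples])
y = np.array([label for _, _, label in samples])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify an unseen sample (hypothetical features)
print(clf.predict([to_feature_vector((170, 230, 8), ["WriteProcessMemory"])]))
```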
6.
In this paper, a novel algorithm for image encryption based on a hash function is proposed. In our algorithm, a 512-bit external secret key is used as the input to the Salsa20 hash function. First, the hash function is modified to generate a key stream that is more suitable for image encryption. The final encryption key stream is then produced by correlating the key stream with the plaintext, which yields both key sensitivity and plaintext sensitivity. The scheme achieves high sensitivity, high complexity and high security with only two rounds of the diffusion process. In the first round of diffusion, the original image is partitioned horizontally into an array of 1,024 sections of size 8 × 8; in the second round, the same operation is applied vertically to the transpose of the resulting array. The main idea of the algorithm is to use the average of the image data for encryption: to encrypt each section, the average of the other sections is employed. The algorithm therefore uses different averages when encrypting different input images (even when the hash-based key sequence is the same), which significantly increases the resistance of the cryptosystem against known/chosen-plaintext and differential attacks. It is demonstrated that the 2D correlation coefficient (CC), peak signal-to-noise ratio (PSNR), encryption quality (EQ), entropy, mean absolute error (MAE) and decryption quality satisfy the security and performance requirements (CC < 0.002177, PSNR < 8.4642, EQ > 204.8, entropy > 7.9974 and MAE > 79.35). The number of pixel change rate (NPCR) analysis reveals that when only one pixel of the plain image is modified, almost all of the cipher pixels change (NPCR > 99.6125 %), and the unified average changing intensity is high (UACI > 33.458 %). Moreover, the proposed algorithm is very sensitive to small changes (e.g., modification of a single bit) in the external secret key (NPCR > 99.65 %, UACI > 33.55 %). The algorithm is shown to yield better security performance than the results obtained with other algorithms.
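A toy NumPy sketch (not the published cipher, and not cryptographically secure) of the averaging-based diffusion idea described above: the image is split into 8 × 8 sections and each section is masked with a key-stream byte combined with the average of all the other sections, so a change to any pixel propagates into every section's cipher value. Image size, key-stream source and combining rule are assumptions.

```python
import numpy as np

def diffuse(image, keystream):
    """image: (256, 256) uint8 array; keystream: (1024,) uint8 array (e.g., hash-derived)."""
    h, w = image.shape
    # Split into 1,024 sections of size 8 x 8
    sections = image.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2).reshape(-1, 8, 8)
    sums = sections.reshape(len(sections), -1).sum(axis=1)
    total = sums.sum()
    out = np.empty_like(sections)
    for i, sec in enumerate(sections):
        # Average of all *other* sections couples every section to the whole plaintext
        other_avg = (total - sums[i]) // ((len(sections) - 1) * 64)
        out[i] = (sec.astype(np.uint16) + other_avg + keystream[i]) % 256
    return out.reshape(h // 8, w // 8, 8, 8).swapaxes(1, 2).reshape(h, w).astype(np.uint8)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # stand-in plain image
ks = rng.integers(0, 256, 1024, dtype=np.uint8)          # stand-in key stream
cipher = diffuse(img, ks)
print(cipher.shape, cipher.dtype)
```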
7.
Gelatin (Gel)-based pH- and thermo-responsive magnetic hydrogels (MH-1 and MH-2) were designed and developed as novel drug delivery systems (DDSs) for cancer chemo/hyperthermia therapy. To this end, Gel was functionalized with methacrylic anhydride (GelMA) and then copolymerized with (2-dimethylaminoethyl) methacrylate (DMAEMA) monomer in the presence of methacrylate-end-capped magnetic nanoparticles (MNPs) and triethylene glycol dimethacrylate (TEGDMA) as crosslinker. Afterward, a thiol-end-capped poly(N-isopropylacrylamide) (PNIPAAm-SH) was synthesized by atom transfer radical polymerization and attached onto the hydrogel through "thiol-ene" click grafting. The preliminary performance of the developed MHs for chemo/hyperthermia therapy of human breast cancer was investigated by loading doxorubicin hydrochloride (Dox) as an anticancer agent and measuring the cytotoxicity of the drug-loaded DDSs with the MTT assay under both chemotherapy and combined chemo/hyperthermia therapy. Owing to the porous morphologies of the fabricated magnetic hydrogels, as observed in scanning electron microscopy images, and to strong physicochemical interactions (e.g., hydrogen bonding), the drug loading capacities of MH-1 and MH-2 were obtained as 72 ± 1.4 and 77 ± 1.8, respectively. The DDSs exhibited acceptable pH- and thermally triggered drug release behavior. The MTT assay results revealed that combining hyperthermia therapy with chemotherapy has a synergistic effect on the anticancer activity of the developed DDSs.
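The loading values above are reported without a unit. For orientation only, these are the two metrics most commonly used in the drug-delivery literature; the abstract does not state which definition (or whether the figures are percentages) applies, so this block is a generic reference rather than the paper's formula.

```latex
\text{Drug loading content (\%)} \;=\; \frac{m_{\text{drug in hydrogel}}}{m_{\text{drug-loaded hydrogel}}} \times 100,
\qquad
\text{Encapsulation efficiency (\%)} \;=\; \frac{m_{\text{drug in hydrogel}}}{m_{\text{drug initially fed}}} \times 100 .
```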
8.
In this paper, we consider an identical parallel machine scheduling problem with release dates. The objective is to minimize the total weighted completion time. This problem is known to be strongly NP-hard. We propose some dominance properties and two lower bounds, and we also present an efficient heuristic. A branch-and-bound algorithm, in which the heuristic, the lower bounds and the dominance properties are incorporated, is proposed and tested on a large set of randomly generated instances.
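A short sketch (a generic construction, not the heuristic or branch-and-bound of the paper) of a list-scheduling heuristic for this problem, usually written P | r_j | Σ w_j C_j: whenever a machine becomes idle, among the jobs already released it starts the one with the largest weight-to-processing-time ratio (WSPT order), which yields a feasible schedule and hence an upper bound for a branch-and-bound. The instance data are invented.

```python
import heapq

def wspt_list_schedule(jobs, m):
    """jobs: list of (release, processing, weight); m: number of identical machines.
    Returns (total weighted completion time, completion times)."""
    machines = [0.0] * m                                            # next free time per machine
    pending = sorted(range(len(jobs)), key=lambda j: jobs[j][0])    # job indices by release date
    ready = []                                                      # max-heap on w/p via negation
    completion = [0.0] * len(jobs)
    i, twct = 0, 0.0
    while i < len(pending) or ready:
        t = min(machines)
        k = machines.index(t)
        # If nothing is ready at the earliest idle time, jump forward to the next release
        if not ready and i < len(pending) and jobs[pending[i]][0] > t:
            t = jobs[pending[i]][0]
            machines[k] = t
        while i < len(pending) and jobs[pending[i]][0] <= t:
            j = pending[i]
            heapq.heappush(ready, (-jobs[j][2] / jobs[j][1], j))
            i += 1
        _, j = heapq.heappop(ready)                                 # best w/p among released jobs
        completion[j] = t + jobs[j][1]
        machines[k] = completion[j]
        twct += jobs[j][2] * completion[j]
    return twct, completion

# (release, processing, weight) -- hypothetical instance
jobs = [(0, 3, 2), (1, 2, 5), (2, 4, 1), (0, 5, 3)]
print(wspt_list_schedule(jobs, m=2))
```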
9.
In recent years, classification learning for data streams has become an important and active research topic. A major challenge posed by data streams is that their underlying concepts can change over time, which requires current classifiers to be revised accordingly and in a timely manner. To detect concept change, a common methodology is to observe the online classification accuracy: if accuracy drops below some threshold value, a concept change is deemed to have taken place. An implicit assumption behind this methodology is that any drop in classification accuracy can be interpreted as a symptom of concept change. Unfortunately, this assumption is often violated in the real world, where data streams carry noise that can also cause a significant reduction in classification accuracy. To compound the problem, traditional noise-cleansing methods are ill-suited to data streams: they normally need to scan the data multiple times, whereas learning from data streams can only afford a one-pass scan because of the data's high speed and huge volume. Another open problem in data stream classification is how to deal with missing values. When new instances containing missing values arrive, how a learning model should classify them and update itself accordingly is an issue whose solution is far from explored. To address these problems, this paper proposes a novel classification algorithm, the flexible decision tree (FlexDT), which extends fuzzy logic to data stream classification. The advantages are threefold. First, FlexDT offers a flexible structure that handles concept change effectively and efficiently. Second, FlexDT is robust to noise, so noise is prevented from interfering with classification accuracy and an accuracy drop can be safely attributed to concept change. Third, it deals with missing values in an elegant way. Extensive evaluations are conducted to compare FlexDT with representative existing data stream classification algorithms using a large suite of data streams and various statistical tests. Experimental results suggest that FlexDT offers a significant benefit to data stream classification in real-world scenarios where concept change, noise and missing values coexist.
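A compact sketch (illustrative, not FlexDT) of the accuracy-monitoring methodology the abstract discusses: keep a sliding window of recent prediction outcomes and flag a concept change when windowed accuracy falls below a threshold. As the abstract points out, noise alone can trigger this detector, which is exactly the weakness FlexDT is designed to avoid. Window size, threshold and the toy stream are assumptions.

```python
from collections import deque

class AccuracyDriftMonitor:
    """Signals 'drift' when accuracy over the last `window` predictions drops below `threshold`."""
    def __init__(self, window=200, threshold=0.7):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def update(self, y_true, y_pred):
        self.outcomes.append(int(y_true == y_pred))
        if len(self.outcomes) == self.outcomes.maxlen:
            accuracy = sum(self.outcomes) / len(self.outcomes)
            if accuracy < self.threshold:
                self.outcomes.clear()          # reset after signalling
                return "drift"
        return "stable"

# Toy stream: the true concept flips halfway through, so a stale classifier's accuracy collapses
monitor = AccuracyDriftMonitor(window=50, threshold=0.6)
for t in range(400):
    y_true = (t % 2) if t < 200 else 1 - (t % 2)    # concept change at t = 200
    y_pred = t % 2                                  # classifier trained on the old concept
    if monitor.update(y_true, y_pred) == "drift":
        print("concept change signalled at t =", t)
        break
```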
10.
Riad Rabia, Ros Frédéric, Hajji Mohamed El, Harba Rachid. Applied Intelligence, 2022, 52(10): 11592-11605

Background removal of an identity (ID) picture consists in separating the foreground (face, body, hair and clothes) from the background of the image. It is necessary groundwork for all modern identity documents and also brings many benefits for ID security. State-of-the-art image processing techniques encounter several segmentation issues and offer only partial solutions, owing to the presence of erratic components such as hair, poor contrast, luminosity variation, shadows, and color overlap between clothes and background. In this paper, a knowledge-infused approach is proposed that hybridizes smart image processing tasks with prior knowledge. The research is based on a divide-and-conquer strategy that aims to simulate the sequential attention of a human performing a manual segmentation. Knowledge is infused by considering the spatial relations between anatomic elements of the ID image (facial features, forehead, body and hair) as well as their signal properties. The process first determines a convex hull around the person's body that includes all of the foreground while staying very close to the contour between background and foreground. Then, a body map generated from biometric analysis, combined with an automatic GrabCut process, is applied to reach a finer segmentation. Finally, a heuristic-based post-processing step that corrects potential hair and fine-boundary issues produces the final segmentation. Experimental results show that the proposed architecture achieves better performance than the tested state-of-the-art methodologies, including active contours, popular general-purpose deep learning techniques, and two other methods considered among the best for portrait segmentation. This technology has been adopted by an international company as its industrial ID foreground-removal solution.

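A short OpenCV sketch (a simplified illustration, not the paper's full knowledge-infused pipeline) of the GrabCut stage described above: a Haar cascade locates the face, a generous rectangle around it stands in for the paper's convex hull and biometric body map, and cv2.grabCut separates foreground from background. File paths are placeholders, and a single frontal face is assumed to be detected.

```python
import cv2
import numpy as np

img = cv2.imread("id_photo.jpg")                      # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
H, W = img.shape[:2]

# Locate the face to seed the foreground region (stand-in for the biometric body map)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
x, y, w, h = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)[0]

# Expand the face box sideways and down to roughly cover hair, shoulders and clothes
x0, y0 = max(0, x - w), max(0, y - h)
rect = (int(x0), int(y0), int(min(W - x0, 3 * w)), int(H - y0))

mask = np.zeros((H, W), np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked (probable) foreground form the person; everything else is background
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite("foreground_mask.png", fg)
```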