By access type (number of articles):
  Fee-based full text: 252
  Free: 57
  Domestic free: 64
By subject area (number of articles):
  Electrical engineering: 5
  General/interdisciplinary: 20
  Chemical industry: 2
  Machinery and instruments: 6
  Energy and power: 1
  Light industry: 2
  Water resources engineering: 2
  Petroleum and natural gas: 1
  Radio and electronics: 60
  General industrial technology: 13
  Automation technology: 261
By publication year (number of articles):
  2024: 3
  2023: 11
  2022: 26
  2021: 23
  2020: 15
  2019: 12
  2018: 10
  2017: 19
  2016: 12
  2015: 19
  2014: 15
  2013: 21
  2012: 23
  2011: 24
  2010: 13
  2009: 16
  2008: 18
  2007: 7
  2006: 17
  2005: 9
  2004: 4
  2003: 7
  2002: 4
  2001: 5
  2000: 2
  1999: 1
  1998: 3
  1997: 1
  1996: 5
  1995: 3
  1994: 3
  1993: 3
  1992: 2
  1991: 2
  1989: 1
  1985: 3
  1984: 1
  1982: 1
  1981: 1
  1980: 1
  1979: 1
  1978: 2
  1977: 1
  1976: 1
  1975: 2
A total of 373 results were found (search time: 15 ms).
1.
2.
In many-task computing (MTC), applications such as scientific workflows or parameter sweeps communicate via intermediate files; application performance strongly depends on the file system in use. The state of the art uses runtime systems providing in-memory file storage that is designed for data locality: files are placed on those nodes that write or read them. With data locality, however, task distribution conflicts with data distribution, leading to application slowdown, and worse, to prohibitive storage imbalance. To overcome these limitations, we present MemFS, a fully symmetrical, in-memory runtime file system that stripes files across all compute nodes, based on a distributed hash function. Our cluster experiments with Montage and BLAST workflows, using up to 512 cores, show that MemFS has both better performance and better scalability than the state-of-the-art, locality-based file system, AMFS. Furthermore, our evaluation on a public commercial cloud validates our cluster results. On this platform MemFS shows excellent scalability up to 1024 cores and is able to saturate the 10G Ethernet bandwidth when running BLAST and Montage.
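As a rough illustration of the striping idea (not the MemFS implementation), the sketch below maps each fixed-size stripe of a file to a compute node by hashing the file name and stripe index; the stripe size and file names are assumptions for the example.

```python
# Minimal sketch: hash-based file striping across all compute nodes.
import hashlib

STRIPE_SIZE = 1 << 20  # 1 MiB stripes; value chosen for illustration only

def node_for_stripe(filename: str, stripe_index: int, num_nodes: int) -> int:
    """Deterministically map (file, stripe) to one of num_nodes servers."""
    key = f"{filename}:{stripe_index}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

def stripe_layout(filename: str, file_size: int, num_nodes: int):
    """Return the node holding each stripe of the file."""
    num_stripes = (file_size + STRIPE_SIZE - 1) // STRIPE_SIZE
    return [node_for_stripe(filename, i, num_nodes) for i in range(num_stripes)]

if __name__ == "__main__":
    # A 5 MiB intermediate workflow file spread over 8 nodes.
    print(stripe_layout("montage/mosaic_tile_17.fits", 5 * (1 << 20), 8))
```

Because every node holds roughly the same share of every file, storage and network traffic spread evenly regardless of which tasks produce or consume the data.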
3.
To strengthen the robustness of hash sequences against various geometric-transformation attacks, a robust image hashing algorithm based on quaternion polar harmonic transform moments and salient features is designed. Linear interpolation and an adaptive Wiener filter are introduced to preprocess the initial image; the color vector angles of the preprocessed image are computed, and their magnitude information is obtained via the Fourier transform, yielding two different spectra; the residual of the two spectra is computed to locate the locally salient regions of the image; salient features are extracted with the LBP operator; the quaternion polar harmonic transform (QPHT) is applied to obtain the QPHT moments of the preprocessed image; and the salient features and QPHT moments are combined into an intermediate hash array. A Logistic map is then introduced to define an encryption function that encrypts the intermediate hash array and outputs the final hash sequence, strengthening its collision resistance. The l2-norm distance between the hash sequences of the source image and a suspect image is measured and compared against an optimized threshold to judge the authenticity of the image. Tests under a variety of geometric-transformation attacks show that, compared with current high-accuracy hashing methods, the proposed algorithm offers better robustness and recognition accuracy.
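The sketch below illustrates only the final two stages described above: scrambling an intermediate hash array with a Logistic map and deciding authenticity by l2 distance. The QPHT/LBP feature extraction is assumed to have produced `intermediate` already, and the map parameters and decision threshold are illustrative, not taken from the paper.

```python
# Minimal sketch: Logistic-map scrambling of an intermediate hash and l2 decision.
import numpy as np

def logistic_sequence(length: int, x0: float = 0.37, mu: float = 3.99) -> np.ndarray:
    xs = np.empty(length)
    x = x0
    for i in range(length):
        x = mu * x * (1.0 - x)          # chaotic Logistic map iteration
        xs[i] = x
    return xs

def encrypt_hash(intermediate: np.ndarray, key_x0: float = 0.37) -> np.ndarray:
    """Permute the intermediate hash with a key-dependent Logistic sequence."""
    order = np.argsort(logistic_sequence(len(intermediate), x0=key_x0))
    return intermediate[order]

def is_authentic(h_source: np.ndarray, h_suspect: np.ndarray, threshold: float = 0.1) -> bool:
    """Accept the suspect image if the l2 distance falls below the threshold."""
    return float(np.linalg.norm(h_source - h_suspect)) < threshold
```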
4.
A SIFT-based image hashing algorithm   (total citations: 6; self-citations: 1; citations by others: 5)
Image hashing has attracted considerable attention in recent years, but existing image hashing algorithms are generally not robust enough against geometric attacks (mainly scaling, rotation, and cropping) and cannot meet the needs of many practical applications. To address this problem, the SIH image hashing algorithm, which is strongly robust to geometric attacks, is proposed. Building on the SIFT operator, which is widely used in image matching and related fields, the algorithm generates an image digest by selectively filtering and compressing SIFT feature vectors and quantizing them with respect to the centroid of the feature-vector distribution. To match the way the digest is constructed, a matching algorithm based on a generalized set distance is designed to measure the distance between digests. Experiments on public image databases show that the algorithm is more robust to both geometric and non-geometric attacks than the comparison methods and can serve a wide range of image recognition/authentication applications.
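A rough stand-in for the digest construction is sketched below: given SIFT descriptors (for example from OpenCV's SIFT implementation), keep a fixed number of them and binarize each against the centroid of the descriptor distribution. The norm-based selection and the nearest-row distance are simplifications of the paper's targeted filtering and generalized set distance, not its exact definitions.

```python
# Minimal sketch: centroid-quantized SIFT digest and a set-style distance.
import numpy as np

def sift_digest(descriptors: np.ndarray, keep: int = 32) -> np.ndarray:
    """descriptors: (N, 128) float SIFT vectors -> (keep, 128) binary matrix."""
    norms = np.linalg.norm(descriptors, axis=1)
    selected = descriptors[np.argsort(norms)[::-1][:keep]]   # keep strongest vectors
    centroid = selected.mean(axis=0)
    return (selected > centroid).astype(np.uint8)             # quantize against centroid

def set_distance(digest_a: np.ndarray, digest_b: np.ndarray) -> float:
    """Average Hamming distance from each row of digest_a to its nearest row in digest_b."""
    dists = [(row ^ digest_b).sum(axis=1).min() for row in digest_a]
    return float(np.mean(dists)) / digest_a.shape[1]
```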
5.
The electricity consumption information acquisition systems of power grid companies are characterized by large data volumes, many online terminals, many channel types, and complex application scenarios. With traditional load-balancing approaches, adding a front-end processor or having one fail causes large-scale migration of terminal protocol handling. This paper proposes using a hardware load balancer for communication load balancing and a consistent hashing algorithm for load balancing across the acquisition front-end processors. Several hash algorithms are analyzed and compared, and the table-driven CRC32 method is chosen to make the hash function efficient, monotonic, and uniformly distributed. When the set of front-end processors changes, only a small number of terminals migrate rather than a large-scale migration, which keeps the load balanced and guarantees the stability and reliability of the system.
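The sketch below shows the general consistent-hashing technique named in the abstract, using Python's built-in CRC32: terminals are placed on a hash ring and served by the first front-end processor clockwise from their position, so adding or removing a processor only moves the terminals in the affected arc. The virtual-node count and the identifiers are assumptions for illustration, not details of the deployed system.

```python
# Minimal sketch: CRC32-based consistent hash ring with virtual nodes.
import bisect
import zlib

class ConsistentHashRing:
    def __init__(self, processors, vnodes: int = 100):
        self.ring = []                          # sorted list of (hash, processor)
        for p in processors:
            for v in range(vnodes):             # virtual nodes smooth out the load
                self.ring.append((zlib.crc32(f"{p}#{v}".encode()), p))
        self.ring.sort()
        self.keys = [h for h, _ in self.ring]

    def processor_for(self, terminal_id: str) -> str:
        h = zlib.crc32(terminal_id.encode())
        i = bisect.bisect_right(self.keys, h) % len(self.ring)
        return self.ring[i][1]

if __name__ == "__main__":
    ring = ConsistentHashRing(["fe-01", "fe-02", "fe-03"])
    print(ring.processor_for("terminal-000117"))
```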
6.
A better similarity index structure for high-dimensional feature data points is highly desirable for building scalable content-based search systems on feature-rich datasets. In this paper, we introduce sparse principal component analysis (Sparse PCA) and Boosting Similarity Sensitive Hashing (Boosting SSC) into traditional spectral hashing to obtain effective, data-aware binary codes for real data. We call this Sparse Spectral Hashing (SSH). SSH formulates binary coding as thresholding a subset of eigenvectors of the graph Laplacian while constraining the number of nonzero features. Convex relaxation and eigenfunction learning are used in SSH to make the coding globally optimal and effective for data points outside the training set. Comparisons in terms of F1 score and AUC show that SSH substantially outperforms other methods on both image and text datasets.
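For intuition, the sketch below shows the simplest form of spectral-style binary coding: project data onto a few principal directions and threshold at the median to get balanced bits. This is a plain PCA-hashing stand-in, not the paper's Sparse PCA / Boosting SSC formulation; all parameter choices are illustrative.

```python
# Minimal sketch: PCA projection + median thresholding as a spectral-hashing analogue.
import numpy as np

def train_hash(X: np.ndarray, n_bits: int = 16):
    """X: (n_samples, n_features). Returns (projection W, data mean, per-bit thresholds)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # principal directions
    W = Vt[:n_bits].T                                    # (n_features, n_bits)
    thresholds = np.median(Xc @ W, axis=0)               # median split -> balanced bits
    return W, mean, thresholds

def encode(X: np.ndarray, W, mean, thresholds) -> np.ndarray:
    return ((X - mean) @ W > thresholds).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 64))
    W, mean, t = train_hash(X, n_bits=16)
    print(encode(X, W, mean, t)[:2])                     # (1000, 16) binary codes
```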
7.
To address the difficulty of authenticating texture-based anti-counterfeiting labels under poor lighting or with poor eyesight, an authentication algorithm for texture anti-counterfeiting labels based on the DCT, perceptual hashing, and visual feature vectors is proposed. First, image feature vectors are extracted using the DCT and perceptual hashing; a feature database is then built, and labels are authenticated automatically via the normalized correlation coefficient (NC) between feature vectors. Experimental results show that the method effectively resists both conventional and geometric attacks and enables automatic authentication. Compared with existing texture anti-counterfeiting techniques, the algorithm requires less database storage space and is more practical.
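The sketch below shows a common DCT-based perceptual hash together with the NC comparison mentioned above; it is a generic illustration, not the paper's exact pipeline. It assumes NumPy/SciPy and a grayscale input already resized to 32x32 (e.g. with Pillow); the block size and decision threshold are assumptions.

```python
# Minimal sketch: DCT perceptual hash + normalized correlation (NC) comparison.
import numpy as np
from scipy.fft import dct

def perceptual_hash(img32: np.ndarray) -> np.ndarray:
    """img32: (32, 32) float grayscale -> 64-element 0/1 feature vector."""
    coeffs = dct(dct(img32, axis=0, norm="ortho"), axis=1, norm="ortho")
    low = coeffs[:8, :8].flatten()              # low-frequency block keeps coarse structure
    return (low > np.median(low)).astype(np.float64)

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def same_label(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.85) -> bool:
    """Treat two label images as matching when NC of their hashes exceeds the threshold."""
    return normalized_correlation(perceptual_hash(img_a), perceptual_hash(img_b)) >= threshold
```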
8.
The “hash–sign–switch” paradigm was first proposed by Shamir and Tauman with the aim of designing an efficient on-line/off-line signature scheme. Nonetheless, all existing on-line/off-line signature schemes based on this paradigm suffer from the key exposure problem of chameleon hashing. To avoid this problem, the signer has to pre-compute and store a large number of different chameleon hash values and the corresponding signatures on those hash values in the off-line phase, and then send the collision and the signature for a particular hash value in the on-line phase. Hence, the computation and storage costs of the off-line phase and the communication cost of the on-line phase in Shamir–Tauman’s signature scheme remain considerable. In this paper, we first introduce a special double-trapdoor hash family based on the discrete logarithm assumption and then use it to construct a more efficient generic on-line/off-line signature scheme without key exposure. Furthermore, we also present the first key-exposure-free generic on-line/off-line threshold signature scheme without a trusted dealer. Additionally, we prove that the proposed schemes achieve the desired security requirements.
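For background, the sketch below is a toy single-trapdoor chameleon hash over a discrete-log group, the primitive whose collision-finding underlies the hash–sign–switch paradigm; it is not the paper's double-trapdoor construction. The tiny safe-prime parameters are purely illustrative and have no cryptographic strength (Python 3.8+ for modular inverse via `pow`).

```python
# Toy sketch: discrete-log chameleon hash CH(m, r) = g^m * y^r mod p with trapdoor x.
p, q, g = 2039, 1019, 4          # toy safe prime p = 2q + 1; g generates the order-q subgroup

x = 777 % q                      # trapdoor (secret key)
y = pow(g, x, p)                 # public hash key

def chameleon_hash(m: int, r: int) -> int:
    return (pow(g, m % q, p) * pow(y, r % q, p)) % p

def find_collision(m: int, r: int, m_new: int) -> int:
    """With trapdoor x, return r_new such that CH(m_new, r_new) == CH(m, r)."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 42, 311
r_new = find_collision(m, r, m_new=1000)
assert chameleon_hash(m, r) == chameleon_hash(1000, r_new)
```

Revealing such a collision leaks the trapdoor x from two openings, which is exactly the key exposure problem the paper's double-trapdoor family is designed to avoid.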
9.
Memory-efficient algorithms for the verification of temporal properties   (total citations: 14; self-citations: 0; citations by others: 14)
This article addresses the problem of designing memory-efficient algorithms for the verification of temporal properties of finite-state programs. Both the programs and their desired temporal properties are modeled as automata on infinite words (Büchi automata). Verification is then reduced to checking the emptiness of the automaton resulting from the product of the program and the property. This problem is usually solved by computing the strongly connected components of the graph representing the product automaton. Here, we present algorithms that solve the emptiness problem without explicitly constructing the strongly connected components of the product graph. By allowing the algorithms to err with some probability, we can implement them with a randomly accessed memory of size O(n) bits, where n is the number of states of the graph, instead of O(n log n) bits that the presently known algorithms require.
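The sketch below illustrates only the memory trick behind such O(n)-bit randomized algorithms: a bit-state table keeps one bit per hash slot instead of storing each visited state, so a hash collision may wrongly treat a new state as visited, which is the controlled error probability the abstract refers to. The graph, hash choice, and table size are assumptions; the full Büchi emptiness check is not shown.

```python
# Minimal sketch: lossy bit-state visited set driving a depth-first search.
import hashlib

class BitStateSet:
    def __init__(self, bits_log2: int = 24):
        self.mask = (1 << bits_log2) - 1
        self.table = bytearray(1 << (bits_log2 - 3))       # one bit per slot

    def _slot(self, state) -> int:
        digest = hashlib.blake2b(repr(state).encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") & self.mask

    def add_if_new(self, state) -> bool:
        """Return True if the state's slot was unset (state treated as new)."""
        s = self._slot(state)
        byte, bit = s >> 3, 1 << (s & 7)
        if self.table[byte] & bit:
            return False                                   # possibly a false "seen" (collision)
        self.table[byte] |= bit
        return True

def count_reachable(start, successors):
    """DFS whose visited set is the lossy bit-state table."""
    visited, stack, count = BitStateSet(), [start], 0
    visited.add_if_new(start)
    while stack:
        state = stack.pop()
        count += 1
        for nxt in successors(state):
            if visited.add_if_new(nxt):
                stack.append(nxt)
    return count

# Example: a ring of 1000 states; usually prints 1000, a collision may drop a state.
print(count_reachable(0, lambda s: [(s + 1) % 1000]))
```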
10.
The paper presents a family of distributed file structures, coined DiFS, for record-structured, disk-resident files with key-based exact or interval match access. The file is organized into buckets that are spread among multiple servers, where a server may hold several buckets. Client requests are serviced by mapping keys onto buckets and looking up the corresponding server in an address table. Dynamic growth, in terms of file size and access load, is supported by bucket splits and bucket migrations onto existing or newly created servers. The major problem that we are addressing is achieving scalability in the sense that both the file size and the client throughput can be scaled up by linearly increasing the number of servers and dynamically redistributing the data. Unlike previous work with similar objectives, our data redistribution explicitly considers the cost/performance ratio of the system by aiming to minimize the number of servers used to provide the required performance. A new server is added only if the overall server load in the system does not drop below a pre-specified threshold. Simulation results demonstrate the scalability with controlled cost/performance and the importance of global load control. The impact of various tuning parameters on the effectiveness of the load control is studied in detail. Finally, we compare our approach with other approaches known to date and demonstrate that each of the previous approaches can be recast as a special case of our model. Recommended by: Mei Hsu. This material is based in part upon work supported by a grant from Hewlett-Packard Corporation and by NSF under grant IRI-9221947.
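A minimal sketch of the client-side lookup path described above is given below: keys map to buckets (range partitioning is used here so interval queries stay contiguous), and an address table maps each bucket to the server currently holding it, so a migration is just a table update. The bucket boundaries, server names, and omission of the load-threshold logic are simplifying assumptions, not details of DiFS itself.

```python
# Minimal sketch: bucket address table for key-based exact and interval lookups.
import bisect

class ClientView:
    def __init__(self, boundaries, address_table):
        self.boundaries = boundaries            # sorted upper bounds of the buckets
        self.address_table = address_table      # bucket index -> server name

    def bucket_of(self, key: int) -> int:
        return bisect.bisect_left(self.boundaries, key)

    def server_of(self, key: int) -> str:
        return self.address_table[self.bucket_of(key)]

    def servers_for_interval(self, lo: int, hi: int):
        """Interval match touches every bucket overlapping [lo, hi]."""
        return {self.address_table[b]
                for b in range(self.bucket_of(lo), self.bucket_of(hi) + 1)}

    def migrate(self, bucket: int, new_server: str) -> None:
        self.address_table[bucket] = new_server  # migration = address-table update

view = ClientView([1000, 2000, 3000, 10**9], {0: "s0", 1: "s0", 2: "s1", 3: "s1"})
print(view.server_of(1500), view.servers_for_interval(500, 2500))
view.migrate(1, "s2")                            # bucket moved to a newly added server
print(view.server_of(1500))
```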