71.
Stereoscopic displays have a promising future because of recent advancements, the popularity of handheld devices, and maturing head-mounted displays. Gesture interactions such as pointing, selection, pinching, and manipulation are now possible in current virtual environments, where accurate distance judgment is required. In this paper, we address the perception of exocentric distance in stereoscopic displays under two target orientations: horizontal and vertical. Three parallax conditions (on screen, 5 cm from screen, and 10 cm from screen) were considered, with the screen fixed at a distance of 100 cm from the observer. Four levels of center-to-center distance between 10 and 50 cm were employed. The perceptual matching task revealed underestimation in all conditions: the overall judgment of exocentric distance was only about 80% of the actual distance. We also found a significant main effect of distance and a significant interaction between layout and distance. The two important findings of this study are that underestimation of exocentric distance increases as the separation between virtual targets increases, and that in the vertical orientation, accuracy increases with closer targets. However, the main effects of layout and parallax on accuracy of judgment were not significant. Engineering implications of the results are also discussed in this paper.
72.
73.
Hierarchical co-clustering: off-line and incremental approaches
Clustering data is challenging especially for two reasons. The dimensionality of the data is often very high which makes the cluster interpretation hard. Moreover, with high-dimensional data the classic metrics fail in identifying the real similarities between objects. The second challenge is the evolving nature of the observed phenomena which makes the datasets accumulating over time. In this paper we show how we propose to solve these problems. To tackle the high-dimensionality problem, we propose to apply a co-clustering approach on the dataset that stores the occurrence of features in the observed objects. Co-clustering computes a partition of objects and a partition of features simultaneously. The novelty of our co-clustering solution is that it arranges the clusters in a hierarchical fashion, and it consists of two hierarchies: one on the objects and one on the features. The two hierarchies are coupled because the clusters at a certain level in one hierarchy are coupled with the clusters at the same level of the other hierarchy and form the co-clusters. Each cluster of one of the two hierarchies thus provides insights on the clusters of the other hierarchy. Another novelty of the proposed solution is that the number of clusters is possibly unlimited. Nevertheless, the produced hierarchies are still compact and therefore more readable because our method allows multiple splits of a cluster at the lower level. As regards the second challenge, the accumulating nature of the data makes the datasets intractably huge over time. In this case, an incremental solution relieves the issue because it partitions the problem. In this paper we introduce an incremental version of our algorithm of hierarchical co-clustering. It starts from an intermediate solution computed on the previous version of the data and it updates the co-clustering results considering only the added block of data. 
This solution has the merit of speeding up the computation with respect to the original approach, which would recompute the result on the overall dataset. In addition, the incremental algorithm guarantees approximately the same answer as the original version while saving much computational load. We validate the incremental approach on several high-dimensional datasets and perform an accurate comparison with both the original version of our algorithm and state-of-the-art competitors. The obtained results open the way to a novel usage of co-clustering algorithms in which it is advantageous to partition the data into several blocks and process them incrementally, thus "incorporating" data gradually into an on-going co-clustering solution.
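The hierarchical, incremental algorithm described above is not available in standard libraries, but the underlying idea of simultaneously partitioning objects and features can be illustrated with scikit-learn's flat `SpectralCoclustering`. This is an illustration only, not the paper's method; the toy occurrence matrix below is an assumption:

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Toy object-by-feature occurrence matrix with two obvious co-clusters.
rng = np.random.default_rng(0)
X = np.zeros((6, 8))
X[:3, :4] = rng.integers(5, 10, size=(3, 4))   # co-cluster 1
X[3:, 4:] = rng.integers(5, 10, size=(3, 4))   # co-cluster 2
X += rng.random(X.shape) * 0.1                 # avoid all-zero rows/columns

# Fit a flat co-clustering: one partition of objects, one of features.
model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(X)
print("object clusters :", model.row_labels_)
print("feature clusters:", model.column_labels_)
```

The hierarchical variant in the paper would recursively refine these row and column clusters into coupled hierarchies instead of stopping at one level.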
74.
We hear distinctive cellphone ringtones in any public place, even in meeting rooms. The sound seems to come from right beside us, and people react instantly, checking pockets, handbags, and belt clips to see whether a friend or relative is calling. Businesspeople, homemakers, secondary-school students, and children all seem to carry a phone each. Yet what we hear is only a small part of the radiation; the noise we neither hear nor see is the real pollution. Engineers must overcome these obstacles on the road to technological innovation.
75.
Oil and gas pipeline condition monitoring is a potentially challenging process due to varying temperature conditions, the harshness of the flowing commodity, and unpredictable terrains. A pipeline breakdown can cost millions of dollars in losses, not to mention the serious environmental damage caused by the leaking commodity. The proposed techniques, although implemented on a lab-scale experimental rig, ultimately aim at providing a continuous monitoring system using an array of different sensors strategically positioned on the surface of the pipeline. The sensors used are piezoelectric ultrasonic sensors. The raw sensor signal is first processed using the discrete wavelet transform (DWT) as a feature extractor and then classified using the powerful learning machine called the support vector machine (SVM). Preliminary tests show that the sensors can detect the presence of wall thinning in a steel pipe by classifying the attenuation and frequency changes of the propagating Lamb waves. The SVM algorithm was able to classify the signals as abnormal in the presence of wall thinning.
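The DWT-plus-SVM pipeline can be sketched as follows. This is a hypothetical reconstruction using PyWavelets and scikit-learn on synthetic signals; the paper's actual features, wavelet choice, and data are not specified, so sub-band energies as features and an amplitude drop between classes are assumptions made purely for illustration:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

# Hypothetical feature extractor: energy of each DWT sub-band of the raw
# ultrasonic signal (the paper's exact features are not specified).
def dwt_energy_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(1)
t = np.linspace(0, 40 * np.pi, 512)
# Synthetic signals: wall thinning is modeled, as an assumption, as
# attenuation of the propagating wave plus measurement noise.
normal = [np.sin(t) + 0.05 * rng.standard_normal(512) for _ in range(20)]
thinned = [0.4 * np.sin(t) + 0.05 * rng.standard_normal(512) for _ in range(20)]

X = np.array([dwt_energy_features(s) for s in normal + thinned])
y = np.array([0] * 20 + [1] * 20)  # 0 = normal, 1 = wall thinning

# Train an SVM on the sub-band energy features.
clf = SVC(kernel="linear").fit(X, y)
```

In a deployed system the classifier would of course be evaluated on held-out signals rather than the training set.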
76.
The technologies of mobile communications pervade our society, and wireless networks sense the movement of people, generating large volumes of mobility data such as mobile phone call records and Global Positioning System (GPS) tracks. In this work, we illustrate the striking analytical power of massive collections of trajectory data in unveiling the complexity of human mobility. We present the results of a large-scale experiment, based on the detailed trajectories of tens of thousands of private cars with on-board GPS receivers, tracked during weeks of ordinary mobile activity. We illustrate the knowledge discovery process that, based on these data, addresses some fundamental questions of mobility analysts: what are the frequent patterns of people's travels? How do big attractors and extraordinary events influence mobility? How can areas of dense traffic be predicted in the near future? How can traffic jams and congestion be characterized? We also describe M-Atlas, the querying and mining language and system that makes this analytical process possible, providing the mechanisms to master the complexity of transforming raw GPS tracks into mobility knowledge. M-Atlas is centered on the concept of a trajectory, and the mobility knowledge discovery process can be specified by M-Atlas queries that realize data transformations, data-driven estimation of the parameters of the mining methods, quality assessment of the obtained results, quantitative and visual exploration of the discovered behavioral patterns and models, composition of mined patterns, models, and data with further analyses and mining, and incremental mining strategies to address scalability.
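M-Atlas itself is a dedicated query-and-mining system and is not reproduced here; as a toy illustration of one step such a pipeline performs, the sketch below bins raw GPS points into grid cells to flag areas of dense traffic. The grid size and the sample coordinates are assumptions for illustration only:

```python
from collections import Counter

# Toy illustration (not M-Atlas): bin GPS points into a fixed-size grid and
# report the densest cells, a crude proxy for areas of dense traffic.
def dense_cells(points, cell=0.01, top=3):
    counts = Counter((round(lat / cell), round(lon / cell))
                     for lat, lon in points)
    return counts.most_common(top)

# 50 points clustered near one location, 5 near another (hypothetical data).
points = [(45.07 + i * 1e-4, 7.68) for i in range(50)] + [(45.20, 7.70)] * 5
print(dense_cells(points))
```

A real system would work on time-stamped trajectories rather than bare points, so that flows and congestion, not just density, can be detected.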
77.
Delivery to the proper tissue compartment is a major obstacle hampering the potential of cellular therapeutics for medical conditions. Delivery of cells within biomaterials may improve localization, but traditional and newer void-forming hydrogels must be made in advance, with cells being added into the scaffold during the manufacturing process. Injectable, in situ cross-linking microporous scaffolds have recently been developed that demonstrate a remarkable ability to provide a matrix for cellular proliferation and growth in vitro in three dimensions. The ability of these scaffolds to deliver cells in vivo is currently unknown. Herein, it is shown that mesenchymal stem cells (MSCs) can be co-injected locally with microparticle scaffolds assembled in situ immediately following injection. MSC delivery within a microporous scaffold enhances MSC retention subcutaneously when compared to cell delivery alone or delivery within traditional in situ cross-linked nanoporous hydrogels. After two weeks, endothelial cells forming blood vessels are recruited to the scaffold, and cells retaining the MSC marker CD29 remain viable within the scaffold. These findings highlight the utility of this approach in achieving localized delivery of stem cells through an injectable porous matrix while avoiding the obstacles of introducing cells during the scaffold manufacturing process.
78.

Cancer classification is one of the main steps in the patient healing process, which compels modern clinical researchers to use advanced bioinformatics methods for it. Cancer classification is usually performed on gene expression data obtained from microarray experiments using advanced machine learning methods. A microarray experiment generates a huge amount of data, and processing it with machine learning methods is a major challenge. In this study, a two-step classification paradigm that merges genetic-algorithm feature selection with machine learning classifiers is utilized. The genetic algorithm is built in the MapReduce programming style, which makes it highly scalable on a Hadoop cluster. To improve performance, the algorithm is extended into a parallel version that processes microarray data in a distributed manner using the Hadoop MapReduce framework. In this paper, the algorithm was tested on eleven GEMS data sets (9 tumors, 11 tumors, 14 tumors, brain tumor 1, lung cancer, brain tumor 2, leukemia 1, DLBCL, leukemia 2, SRBCT, and prostate tumor), and its accuracy reached 100% with fewer than 25 selected features. The proposed cloud-computing-based MapReduce parallel genetic algorithm performed well on gene expression data, and its scalability is effectively unlimited thanks to the underlying Hadoop MapReduce platform. The presented results indicate that the proposed method can be effectively applied to real-world microarray data in the cloud environment; in addition, the Hadoop MapReduce framework demonstrates a substantial decrease in computation time.
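The genetic-algorithm feature-selection step, without the paper's Hadoop MapReduce parallelization, can be sketched on synthetic data. The fitness function below is a simple correlation-based stand-in for the classifier accuracy the paper would use, and all sizes and rates are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_features, pop_size, n_gen = 30, 20, 40

# Synthetic microarray-like data: only features 0-4 carry the class signal.
X = rng.standard_normal((100, n_features))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

def fitness(mask):
    """Reward masks whose selected features predict y, penalize subset size."""
    if mask.sum() == 0:
        return 0.0
    score = abs(np.corrcoef(X[:, mask.astype(bool)].sum(axis=1), y)[0, 1])
    return score - 0.005 * mask.sum()

# Each individual is a 0/1 mask over the features.
pop = rng.integers(0, 2, size=(pop_size, n_features))
for _ in range(n_gen):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-pop_size // 2:]]   # selection: keep best half
    cut = rng.integers(1, n_features)                 # one-point crossover
    kids = np.concatenate([np.concatenate([a[:cut], b[cut:]])[None]
                           for a, b in zip(parents, np.roll(parents, 1, axis=0))])
    flip = rng.random(kids.shape) < 0.02              # mutation
    pop = np.concatenate([parents, np.where(flip, 1 - kids, kids)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```

In the MapReduce formulation, the fitness evaluations of the population would be distributed across mappers, with a reducer performing selection and breeding.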
79.
The availability of data represented with multiple features coming from heterogeneous domains is getting more and more common in real world applications. Such data represent objects of a certain type, connected to other types of data, the features, so that the overall data schema forms a star structure of inter-relationships. Co-clustering these data involves the specification of many parameters, such as the number of clusters for the object dimension and for all the features domains. In this paper we present a novel co-clustering algorithm for heterogeneous star-structured data that is parameter-less. This means that it does not require either the number of row clusters or the number of column clusters for the given feature spaces. Our approach optimizes the Goodman–Kruskal’s τ, a measure for cross-association in contingency tables that evaluates the strength of the relationship between two categorical variables. We extend τ to evaluate co-clustering solutions and in particular we apply it in a higher dimensional setting. We propose the algorithm CoStar which optimizes τ by a local search approach. We assess the performance of CoStar on publicly available datasets from the textual and image domains using objective external criteria. The results show that our approach outperforms state-of-the-art methods for the co-clustering of heterogeneous data, while it remains computationally efficient.
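The Goodman–Kruskal τ that CoStar optimizes measures the proportional reduction in error when predicting one categorical variable from the other. For a single contingency table it can be computed directly; the extension to co-clustering solutions used by CoStar is not shown here:

```python
import numpy as np

# Goodman-Kruskal tau: proportional reduction in prediction error for the
# column variable given the row variable of a contingency table.
def goodman_kruskal_tau(table):
    p = table / table.sum()
    col = p.sum(axis=0)                              # column marginals
    row = p.sum(axis=1)                              # row marginals
    err_marginal = 1.0 - np.sum(col ** 2)            # error ignoring rows
    err_conditional = 1.0 - np.sum(p ** 2 / row[:, None])  # error given rows
    return (err_marginal - err_conditional) / err_marginal

# Perfect association: each row determines its column exactly -> tau = 1.
perfect = np.array([[10.0, 0.0], [0.0, 10.0]])
print(goodman_kruskal_tau(perfect))   # 1.0
# Statistical independence -> tau = 0.
indep = np.array([[5.0, 5.0], [5.0, 5.0]])
print(goodman_kruskal_tau(indep))     # 0.0
```

In CoStar's local search, a move (merging or splitting clusters) is accepted when it increases τ computed on the cluster-level contingency tables.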
80.
In this research, a biodegradable blend of poly(ɛ-caprolactone) (PCL) and poly(lactic acid) (PLA) is proposed as a new material for the production of a printing plate for the embossing process. Printing plates for embossing consist of raised printing elements and recessed nonimage elements. In production of the printing plates, laser technology was used to form a relief printing plate. The embossing process is based on the principle of pressing the relief printing plate into the printing substrate, which causes controlled deformation of the substrate and a three-dimensional (3D) effect. Coir fibers (CFs) were added as a natural filler to the PCL/PLA blends to improve and adjust the properties of the produced blends. Scanning electron microscopy, dynamic mechanical analysis (DMA), roughness, and hardness measurements were performed on the prepared materials, and 2D and 3D microscopy was conducted on the laser-engraved printing plates. Results have shown that the addition of CFs improved the mechanical properties of the produced materials. DMA results indicate a semicrystalline structure in all prepared blends and that the addition of CFs raises the elasticity of the composites. Laser engraving showed that it is possible to engrave the produced biodegradable materials and to use them as materials for the production of printing plates.