101.
To address the problems of non-standard movements and the lack of supervision or guidance when people exercise on their own, an exercise-training assistance system based on the Kinect sensor is designed. Kinect is used to capture the coordinates of human joint points and extract features, and the dynamic time warping (DTW) algorithm is used to recognize exercise movements: template actions are matched against real-time actions, and an evaluation is given according to the degree of matching. Experimental results show that the system achieves an average correct recognition rate of 91.25% for exercise movements and a correct evaluation rate of 95.9%, providing timely and effective feedback and serving as a useful aid for exercise training.
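As a rough illustration of the matching step described above, the following is a minimal DTW sketch in Python, assuming joint coordinates have already been flattened into one feature vector per frame; the function name, the Euclidean frame distance, and the usage comment are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def dtw_distance(template: np.ndarray, live: np.ndarray) -> float:
    """Dynamic time warping cost between two action sequences.

    template, live: arrays of shape (n_frames, n_features), e.g. flattened
    Kinect joint coordinates per frame (the feature layout is an assumption).
    """
    n, m = len(template), len(live)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(template[i - 1] - live[j - 1])  # frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Hypothetical usage: pick the template action with the smallest DTW cost and
# grade the live action by how small that cost is.
# best_template = min(templates, key=lambda t: dtw_distance(t, live_sequence))
```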
102.
We present a system for fast capture of a personalized 3D avatar using two Kinects. The key feature of the system is that the capturing process can be finished in a moment, quantitatively within 3 s, which is short enough for the person being captured to hold a static pose stably and comfortably. This fast capture is achieved by using two calibrated Kinects to capture the front and back of the person simultaneously. To alleviate the view-angle limit, the two Kinects are driven by their automatic motors to capture three scans covering the upper, middle, and lower parts of the person from the front and the back respectively, resulting in three partial scans for each Kinect. After denoising, all partial scans are rigidly aligned together using a novel supersymmetric third-order graph matching algorithm. Since all these partial scans can be captured in a moment, the discrepancy between them caused by body movement is negligible, saving the effort of non-rigid alignment. The missing gaps between the front and back scans are filled using quadratic Bézier curves. The final reconstructed mesh model demonstrates good fidelity to the person, with personalized details of hairstyle, face, and salient cloth wrinkles.
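The gap-filling step lends itself to a short sketch: sampling a quadratic Bézier curve between a boundary point of the front scan and the corresponding point of the back scan. The choice of control point (pushing the midpoint outward along an assumed surface direction) is a placeholder, not the authors' exact construction.

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, n_samples=20):
    """Sample B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2 for t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    p0, p1, p2 = np.asarray(p0), np.asarray(p1), np.asarray(p2)
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

# Hypothetical gap fill between a front-scan boundary point and the matching
# back-scan boundary point; the control point is offset along an assumed
# outward direction to approximate the missing side of the body.
front_pt = np.array([0.10, 1.20, 0.60])
back_pt = np.array([0.10, 1.20, 0.85])
outward = np.array([1.0, 0.0, 0.0])                 # assumed direction
control = (front_pt + back_pt) / 2 + 0.05 * outward
bridge_points = quadratic_bezier(front_pt, control, back_pt)
```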
103.
A new approach enabling a mobile robot to recognize and classify furniture-like objects composed of assembled parts using a Microsoft Kinect is presented. Starting from considerations about the structure of furniture-like objects, i.e., objects which can play a role in the course of a mobile robot mission, the 3D point cloud returned by the Kinect is first segmented into a set of "almost convex" clusters. Objects are then represented by means of a graph expressing mutual relationships between such clusters. Off-line, snapshots of the same object taken from different positions are processed and merged in order to produce multiple-view models that are used to populate a database. On-line, as soon as a new object is observed, a run-time window of subsequent snapshots is used to search for a correspondence in the database. Experiments validating the approach with a set of objects (e.g., chairs, tables, but also other robots) are reported and discussed in detail.
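The graph representation of clusters can be sketched as follows, assuming the "almost convex" segmentation has already been performed. Node attributes, the adjacency threshold, and the graph-edit-distance lookup in the comment are illustrative assumptions; the paper uses its own matching scheme.

```python
import numpy as np
import networkx as nx

def build_cluster_graph(clusters, adjacency_threshold=0.05):
    """Represent an object as a graph of 'almost convex' point-cloud clusters.

    clusters: list of (n_i, 3) arrays of 3D points from the Kinect point cloud
    (segmentation is assumed to have been done already).
    """
    g = nx.Graph()
    centroids = [c.mean(axis=0) for c in clusters]
    for i, (cluster, centroid) in enumerate(zip(clusters, centroids)):
        g.add_node(i, centroid=centroid, size=len(cluster))
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            # Smallest point-to-point gap between the two clusters.
            gap = np.min(np.linalg.norm(
                clusters[i][:, None, :] - clusters[j][None, :, :], axis=2))
            if gap < adjacency_threshold:          # clusters touch / are adjacent
                g.add_edge(i, j, distance=float(
                    np.linalg.norm(centroids[i] - centroids[j])))
    return g

# A crude stand-in for the database lookup: compare the observed graph with
# each stored multiple-view model graph, e.g. by graph edit distance.
# best_model = min(model_graphs, key=lambda m: nx.graph_edit_distance(g, m))
```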
104.
To address the heavy noise and low contrast of Kinect infrared images, a method for enhancing Kinect infrared scenes is proposed based on an analysis of their characteristics. First, the contrast of the infrared image is improved with the OCTM linear-programming method; second, the noise frequencies are located by examining the frequency-domain characteristics of the Kinect infrared scene; finally, the image is enhanced by combining a frequency-domain band-stop filter with bilateral filtering, removing noise while preserving edge detail. To verify the effectiveness and practicality of the method, experiments were conducted on infrared scenes under different conditions, and multiple captured scenes were evaluated both subjectively and objectively. The experiments and tests show that the method effectively enhances Kinect infrared scenes and preserves image edge information well while denoising.
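A minimal sketch of the band-stop plus bilateral filtering stage is given below, assuming an 8-bit single-channel IR frame. The band limits and the bilateral parameters are placeholder values, not those tuned in the paper, and the OCTM contrast step is omitted.

```python
import numpy as np
import cv2

def enhance_ir(ir_image, band_low=0.30, band_high=0.45):
    """Suppress a ring of noisy frequencies, then smooth with a bilateral filter.

    ir_image: single-channel uint8 Kinect IR frame. The band limits (given as
    a fraction of the Nyquist radius) and the bilateral filter parameters are
    assumptions for illustration.
    """
    img = ir_image.astype(np.float32) / 255.0
    rows, cols = img.shape

    # Normalized distance of every frequency bin from the spectrum center.
    u = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    v = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    radius = np.sqrt(u ** 2 + v ** 2) / 0.5          # 1.0 == Nyquist
    band_stop = ~((radius >= band_low) & (radius <= band_high))

    spectrum = np.fft.fftshift(np.fft.fft2(img))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * band_stop)).real

    # Bilateral filtering keeps edges while smoothing what the band-stop missed.
    filtered = np.clip(filtered, 0.0, 1.0).astype(np.float32)
    return cv2.bilateralFilter(filtered, d=9, sigmaColor=0.1, sigmaSpace=5)
```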
105.
With the continuous development of robot control technology, human-robot interaction is moving toward more convenient and faster forms. To make robot control more convenient, a natural user interface (NUI) interaction mode is adopted and a 6-DOF robotic arm control system based on the Kinect V2.0 motion-sensing sensor is designed. The control program is developed in the VS2015 + Kinect SDK 2.0 environment, and Kinect V2.0 skeleton data are used to control the 6-DOF robotic arm: the rotation angles of the human shoulder and elbow and the grasping gesture control the six joints of the arm respectively. With an Arduino as the lower-level hardware core and serial communication between the two, the effectiveness of the method is verified. Experiments show that this NUI-based control system can control the robotic arm to grasp objects effectively and quickly.
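The original system is written against the C#/Kinect SDK 2.0 stack; the Python sketch below only illustrates the two core ideas, computing a joint rotation angle from three skeleton points and sending joint angles to an Arduino over a serial link. The port name, baud rate, and one-byte-per-joint protocol are assumptions.

```python
import numpy as np
import serial  # pyserial

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist from a Kinect skeleton frame."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Hypothetical wiring: one angle byte (0-180) per joint, sent to the Arduino,
# which maps each byte onto a servo. Port name and protocol are assumptions.
arm_port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

def send_pose(angles_deg):
    """angles_deg: six joint angles, clamped to the servo range [0, 180]."""
    frame = bytes(int(np.clip(a, 0, 180)) for a in angles_deg)
    arm_port.write(frame)

# elbow = joint_angle(shoulder_xyz, elbow_xyz, wrist_xyz)   # from skeleton data
# send_pose([base, shoulder, elbow, wrist_pitch, wrist_roll, gripper])
```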
106.
To cope with the inherent noise of the Kinect camera in robot visual localization and mapping, a SLAM system is proposed that combines an improved ORB feature-matching algorithm with an improved environment measurement model. The system extracts image feature points with the improved ORB algorithm, establishes correspondences between feature points in adjacent frames, and filters the depth map; the ICP algorithm is used to compute robot motion, mismatched points are removed through the environment measurement model, loop-closure detection is performed, and finally g2o is used to globally optimize the poses and build a point-cloud map of the environment. Tests in a real environment and on public datasets show that this V-SLAM system can update the camera pose accurately and build an environment point-cloud map.
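The paper's "improved ORB" details are not reproduced here; the sketch below only shows standard ORB extraction and matching between adjacent frames with a ratio test, which is the front end that the ICP and g2o stages described above would consume. The ratio value is a common default, not the one tuned in the paper.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def match_adjacent_frames(prev_gray, curr_gray, ratio=0.75):
    """Detect ORB keypoints in two consecutive grayscale frames and keep
    matches passing Lowe's ratio test."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return [], kp1, kp2
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good, kp1, kp2

# The matched pixel pairs, back-projected with the filtered depth map, would
# feed the ICP motion estimate and then the g2o pose-graph optimization.
```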
107.
To achieve accurate recognition of sign-language letters in sign-language video, an algorithm based on DI_CamShift and SLVW is proposed. The method uses Kinect as the sign-language video capture device, obtaining depth information along with the color video. The principal-axis orientation angle and centroid position of the sign gesture are computed in the depth image, and the search window is adjusted accordingly to track the gesture accurately; the gesture is then segmented with an Otsu algorithm based on the depth integral image, and its SIFT features are extracted. A bag of sign-language visual words (SLVW) is built as the sign-language feature, and an SVM is used for recognition. Experimental validation shows a best recognition rate of 99.87% for individual sign-language letters and an average recognition rate of 96.21%.
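A compact sketch of the segmentation-plus-bag-of-words stage is given below, assuming an 8-bit grayscale hand crop and a precomputed visual-word vocabulary. Plain Otsu thresholding stands in for the paper's depth-integral-image variant, and the SVM usage in the comments is illustrative.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def hand_descriptor(gray_hand, vocabulary):
    """Segment the hand with Otsu thresholding, extract SIFT descriptors, and
    quantize them against a (k, 128) visual-word vocabulary."""
    _, mask = cv2.threshold(gray_hand, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    keypoints, descriptors = sift.detectAndCompute(gray_hand, mask)
    hist = np.zeros(len(vocabulary), dtype=np.float32)
    if descriptors is not None:
        # Assign each descriptor to its nearest visual word.
        dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :],
                               axis=2)
        for word in dists.argmin(axis=1):
            hist[word] += 1
        hist /= max(hist.sum(), 1.0)
    return hist

# Hypothetical training/recognition loop: train_hists and labels would come
# from labelled sign-letter frames, the vocabulary from k-means on training SIFT.
# clf = SVC(kernel="rbf").fit(train_hists, labels)
# letter = clf.predict([hand_descriptor(test_gray, vocabulary)])
```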
108.
Several augmented reality systems have been proposed for different target fields such as medicine, cultural heritage, and the military. However, most current AR authoring tools are actually programming interfaces that are suitable only for programmers. In this paper, we propose an AR authoring tool that provides advanced visual effects, such as occlusion and media content. This tool allows non-programming users to develop low-cost AR applications, especially oriented toward on-site assembly and maintenance/repair tasks. A new 3D edition interface is proposed, using photos and Kinect depth information to improve 3D scene composition. To validate our AR authoring tool, two evaluations have been performed, testing the authoring process and the task execution using AR. The evaluation results show that overlaying 3D instructions on the actual work pieces reduces the error rate for an assembly task by more than 75%, particularly diminishing the cumulative errors common in sequential procedures. The results also show that the proposed edition interface improves the 3D authoring process, making it possible to create more accurate AR scenarios 70% faster.
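The occlusion effect mentioned above can be illustrated with a per-pixel depth test: a virtual overlay pixel is drawn only where its rendered depth is closer than the Kinect depth for the same pixel. The array shapes, millimetre units, and assumed depth-to-color registration are placeholders, not the tool's actual pipeline.

```python
import numpy as np

def composite_with_occlusion(camera_rgb, kinect_depth_mm,
                             overlay_rgb, overlay_depth_mm, overlay_mask):
    """Per-pixel occlusion test for an AR overlay.

    camera_rgb:       (H, W, 3) uint8 color frame
    kinect_depth_mm:  (H, W) real-scene depth registered to the color frame
    overlay_*:        rendered virtual content with its own depth buffer
    overlay_mask:     (H, W) bool, True where the virtual object has pixels
    """
    out = camera_rgb.copy()
    visible = overlay_mask & (overlay_depth_mm < kinect_depth_mm)
    out[visible] = overlay_rgb[visible]
    return out
```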
109.
The Kinect depth sensor, released by Microsoft in 2010, provides scene depth and color information simultaneously, and one key field of its application is object recognition. Traditional object recognition is mostly limited to special cases such as gesture recognition and face recognition, whereas large-scale object recognition has become the research trend in recent years. The RGB-D datasets obtained with Kinect are mostly multi-scene, multi-view, category-labeled datasets collected in indoor and office environments, providing a learning basis for the design of large-scale object recognition algorithms. At the same time, the depth information acquired by Kinect provides strong cues for object recognition; methods that exploit depth information have clear advantages over earlier methods and greatly improve recognition accuracy. This article first introduces Kinect's depth acquisition technology in detail, then surveys existing 3D object recognition methods and analyzes and compares existing 3D test datasets, and finally summarizes the article and briefly discusses the development trends of future 3D object recognition algorithms and 3D test datasets.
110.
In this paper, a 3D computer vision system for cognitive assessment and rehabilitation based on the Kinect device is presented. It is intended for individuals with body-scheme dysfunctions and left-right confusion. The system processes depth information to overcome the shortcomings of a previously presented 2D vision system for the same application. It achieves left- and right-hand tracking, as well as face and facial feature (eyes, nose, and ears) detection. The system is easily implemented with a consumer-grade computer and an affordable Kinect device, and it is robust to drastic background and illumination changes. The system was tested and achieved a successful monitoring percentage of 96.28%. Automating the monitoring of body-part motion, analyzing it against the psychomotor exercise prescribed to the patient, and storing the results of a set of exercises free rehabilitation experts from these demanding tasks. The vision-based system is potentially applicable to other tasks with minor changes.
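As a rough illustration of the kind of processing involved, the sketch below detects a face in the color frame with a standard Haar cascade and picks hand candidates as the connected blobs inside a near-depth band. The depth band, minimum blob area, and cascade choice are assumptions, not the thresholds or detectors used by the system.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_and_hands(color_bgr, depth_mm, hand_near_mm=500, hand_far_mm=900):
    """Detect the face in the color frame and pick hand candidates as the
    connected blobs closest to the sensor (placeholder thresholds)."""
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    near = ((depth_mm > hand_near_mm) & (depth_mm < hand_far_mm)).astype(np.uint8)
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(near)
    hands = [tuple(centroids[i]) for i in range(1, n_labels)
             if stats[i, cv2.CC_STAT_AREA] > 800]   # drop tiny blobs
    return faces, hands
```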