Similar Articles
20 similar articles found (search time: 31 ms)
1.
叶章浩 (Ye Zhanghao). 《软件》(Software), 2020, (4): 293-296
As new teaching tools, "MOOC" and "Rain Classroom" (雨课堂) can effectively compensate for the shortcomings of traditional classroom teaching. A blended teaching model combining the two promotes classroom participation, facilitates interaction between teachers and students, and improves teaching efficiency. This paper first introduces the concepts of "MOOC" and "Rain Classroom", then discusses the problems in current university fundamentals-of-computing instruction, and finally proposes reform measures for the course based on "MOOC" and "Rain Classroom" technology. These measures can effectively strengthen students' interest in learning, improve teaching efficiency, and help raise students' computer proficiency.

2.
The "Fundamentals of Computing" course is offered to non-computer-science majors and is one of the required general courses at most universities. As computing technology permeates the curriculum, the course has steadily matured. Considering the characteristics of lower-year students, this paper introduces ideological and political education into the course from five aspects, striving to promote the coordinated development of course construction and student cultivation through all-around education.

3.
《Information & Management》1988,15(5):251-254
Personal Computer (PC) sales have taken a downward turn, primarily because PC systems are still not sufficiently user-oriented. Some basic systems guidelines should be followed to improve the acceptance of PCs by users who are not familiar with the technical side of a computer. A set of "rules" and some "systems analysis basics" are supplied that will help make future PC systems user-oriented.

4.
Taking the "Data Structures" course as the orientation can guide the design of application cases for related content in the "C Programming" course. Integrating the idea of "promoting learning through competition" into foundational computing courses, with an outcome-oriented, step-by-step progression from "C Programming" to "Data Structures" and a project case for each knowledge point to drive interest, reflects the applied-undergraduate principle of learning for practical use.

5.
Tracking-by-detection has become a topic of great interest for computer vision applications in recent years. Existing tracking-by-detection frameworks generally have difficulty with congestion, occlusion, and inaccurate detection in crowded scenes. In this paper, we propose a new framework for Multi-Object Tracking-by-Detection (MOT-bD) based on a spatio-temporal interlaced-encoding video model and a specialized Deep Convolutional Neural Network (DCNN) detector. The spatio-temporal variation of objects between images is encoded into "interlaced images". A specialized "interlaced object" convolutional detector is trained to detect objects in interlaced images, and a classical association algorithm links the detections; because interlaced objects are built to increase overlap during the association step, the same detector/association pair achieves better MOT performance than on non-interlaced images. The effectiveness and robustness of this contribution are demonstrated by experiments on popular tracking-by-detection datasets and benchmarks such as PETS, TUD, and MOT17. Experimental results show that video interlacing improves tracking performance in both precision and accuracy, and that the "power of video-interlacing" outperforms several state-of-the-art multiple-object tracking frameworks.

6.
Violations of published strictures on password use have led to widespread unauthorized access to computer systems. The problem may compound as inexpert users, handicapped by inadequate guidance and ignorance of computers, are increasingly involved on networked, supposedly "user-friendly" workstations. The literature on password methods reflects a technocentric focus emphasizing security without due regard for user comfort, i.e., a "user-hostile" system perspective. We present a "user-friendly" model for the password selection and re-creation processes rooted in cognitive psychology. The model suggests two approaches to password selection, one rooted in an idiographic, or particularized, treatment of experience, the other in a nomothetic, or generalized, one, that exploit principles of recall, memory aids, and simple formal transformations. A third approach, exploiting environmental cues (hence recognition rather than recall) is also considered. Intermediate approaches enable tradeoffs between password security and memorability appropriate to the context and cognitive style of the user. The reduction of the approaches to practice is illustrated in numerous examples. The approaches yield passwords more vulnerable to discovery than those envisioned in system-oriented theory, yet operationally superior to many prompted by strictures reflecting a technocentric system perspective. We recommend that guidance materials on password use be made available on systems.
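One of the memory-aid approaches the abstract describes derives a password from personally meaningful experience via a simple formal transformation. The sketch below is illustrative only (the phrase, the initial-letter rule, and the substitution map are all assumptions, not the paper's method): take the first letter of each word of a memorable phrase, then apply character substitutions.

```python
# Hypothetical sketch: phrase -> password via a simple formal transformation.
# The substitution map is an illustrative assumption, not from the paper.
SUBS = {"a": "@", "o": "0", "i": "1", "s": "$"}

def phrase_to_password(phrase):
    # First letter of each word, lowercased, then character substitutions.
    initials = "".join(word[0] for word in phrase.lower().split())
    return "".join(SUBS.get(c, c) for c in initials)

print(phrase_to_password("my dog barks at six every morning"))  # 'mdb@$em'
```

Such passwords are easier to recall than random strings, at the cost of some vulnerability to discovery, which is exactly the tradeoff the abstract discusses.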

7.
The Transtaxor is a computer program which correlates a given problem definition or overall strategy, expressed as a general input syntax, with a given set of circumstances, expressed as a specific input "situation string", to produce an output formula or course of action. The transtaxor is written once for a given computer and thereafter becomes applicable to any problem which can be expressed in terms of a syntax. In order to extend the methodology to a wide variety of problems which can be expressed qualitatively, the usual syntax definitions have been extended to provide for the inclusion of relations, the division of the syntax into levels, and the paralleling of these levels through "developable agendums". A simple but powerful "transcribing language" incorporates the "action-directing language" into the processing. The concept of "delineation" facilitates the generation of the basic parts, which are developed by the transtaxor, with the aid of a list-processing language, into progressively higher levels of the syntax until the final goal is produced. Examples are given of the application of the methodology to automatic-programming-language translation and information retrieval.

8.
We present the results of a community survey regarding genetic programming benchmark practices. Analysis shows broad consensus that improvement is needed in problem selection and experimental rigor. While views expressed in the survey dissuade us from proposing a large-scale benchmark suite, we find community support for creating a "blacklist" of problems which are in common use but have important flaws, and whose use should therefore be discouraged. We propose a set of possible replacement problems.

9.
This paper provides an in-depth analysis of 10 switch fabrics (SFs) using six basic router functional requirements, a primary SF selection criterion, and a semi-quantitative compliance scoring scheme. The goal is to select candidates that can serve a hardware (HW)-wise scalable and bi-directionally reconfigurable Internet Protocol (IP) router. HW scalability and bi-directional HW reconfigurability denote, respectively, the router's ability to (1) expand according to network traffic capacity growth; and (2) be functionally converted on demand to perform in two conceptual directions: "downward" as an "edge" router, or "upward" as a "hub" or "backbone" router, according to the layer of the Internet service provider's network hierarchy it is targeted to serve at the moment. The overall results point to the Hypercube, the Multistage Interconnection Network (MIN), and the 3-Dimensional Torus Mesh as potential candidates.
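A semi-quantitative compliance scoring scheme of the kind the abstract mentions can be sketched as a per-requirement ordinal score summed into a ranking. The requirement names, fabric list, and score values below are illustrative assumptions, not the paper's actual data.

```python
# Hypothetical sketch of semi-quantitative compliance scoring:
# 0 = non-compliant, 1 = partially compliant, 2 = fully compliant
# per requirement; candidates are ranked by total score.
REQUIREMENTS = ["scalability", "reconfigurability", "throughput"]

def total_score(scores):
    return sum(scores[r] for r in REQUIREMENTS)

# Illustrative scores only -- not the paper's evaluation.
fabrics = {
    "Hypercube": {"scalability": 2, "reconfigurability": 2, "throughput": 1},
    "Crossbar":  {"scalability": 0, "reconfigurability": 1, "throughput": 2},
    "MIN":       {"scalability": 2, "reconfigurability": 1, "throughput": 2},
}

ranked = sorted(fabrics, key=lambda f: total_score(fabrics[f]), reverse=True)
print(ranked)  # highest-scoring fabrics first
```

The paper's actual scheme covers six requirements and ten fabrics, but the ranking mechanics are the same.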

10.
Choosing the "Introduction to Computing" course as the entry point for bilingual instruction in computing courses is both realistic and feasible. Based on the practice of running this course bilingually, this paper explores textbook selection, choice of teaching model, integration of teaching methods, construction of teaching resources, and streamed class grouping. Promoting bilingual instruction requires the joint effort of the university, teachers, and students.

11.
As China's higher vocational education reform moves toward diversification and internationalization, strengthening vocational students' ideological and political awareness and their computing literacy is urgent. Starting from the implementation of curriculum-based ideological and political education, this paper takes course construction as one research path and career development as the path for ideological guidance, and sets out the methods, steps, and key implementation points for building such education into the "Fundamentals of Computer Application" course, in the hope of providing a reference for comprehensively advancing the course's construction in higher vocational colleges.

12.
One of the main purposes of brain-computer interfaces (BCI) is to provide people with an alternative communication channel. This objective initially focused on handicapped subjects, but its scope has since broadened to healthy users. BCIs usually record brain activity using electroencephalograms (EEG), according to four main neuro-paradigms (slow cortical potentials, motor imagery, the P300 component, and visual evoked potentials). These paradigms are not intuitive and are difficult to implement. Accordingly, this work investigates an alternative neuro-paradigm called imagined speech, which refers to the internal pronunciation of words without emitting sounds or making facial movements. Specifically, the research focuses on recognizing five Spanish words corresponding to the English words "up", "down", "left", "right", and "select", with which a computer cursor could be controlled. We perform offline automatic classification on a dataset of EEG signals from 27 subjects. The method implements a two-stage channel selection: the first stage, approached as a multi-objective optimization problem over the error rate and the number of channels, obtains a Pareto front; the second stage selects a single solution (channel combination) from the front by applying a fuzzy inference system (FIS). We assess the method's performance using a channel combination and a test set not used to generate the front. Several FIS configurations were explored to evaluate whether a FIS can select channel combinations that improve, or at least maintain, the accuracies obtained using all channels for each subject's data. We found that one configuration, FIS3×3 (three membership functions for each input variable, error rate and number of channels), gave the best trade-off between the number of fuzzy rules and accuracy (68.18% using around 7 channels), with accuracy statistically similar to using all channels (70.33%). These results demonstrate the feasibility of using a FIS to automatically select a solution from the Pareto front for channel selection in imagined-speech classification. The presented method outperforms previous work in accuracy and shows a dependence relationship between EEG data and imagined words.
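The first stage described above keeps only non-dominated (error rate, channel count) solutions. A minimal sketch of that Pareto-front extraction, with purely illustrative data (the numbers are not from the paper), might look like:

```python
# Hypothetical sketch: extract the Pareto front from a list of
# (error_rate, n_channels) solutions. A solution is dominated if some
# other solution is no worse on both objectives and differs from it.
def pareto_front(solutions):
    front = []
    for s in solutions:
        dominated = any(
            t[0] <= s[0] and t[1] <= s[1] and t != s
            for t in solutions
        )
        if not dominated:
            front.append(s)
    return front

# Illustrative candidate channel combinations (error rate, channel count).
sols = [(0.40, 2), (0.32, 4), (0.30, 7), (0.35, 7), (0.30, 14)]
print(pareto_front(sols))  # [(0.40, 2), (0.32, 4), (0.30, 7)]
```

The paper's second stage (the FIS) would then pick one point from this front; that fuzzy step is omitted here.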

13.
The present paper concentrates on the issue of feature selection for unsupervised word sense disambiguation (WSD) performed with an underlying Naïve Bayes model. It introduces web N-gram features which, to our knowledge, are used for the first time in unsupervised WSD. While creating features from unlabeled data, we are "helping" a simple, basic knowledge-lean disambiguation algorithm to significantly increase its accuracy as a result of receiving easily obtainable knowledge. The performance of this method is compared to that of others that rely on completely different feature sets. Test results concerning nouns, adjectives and verbs show that web N-gram feature selection is a reliable alternative to previously existing approaches, provided that a "quality list" of features, adapted to the part of speech, is used.

14.
We humans usually think in words; to represent our opinion about, e.g., the size of an object, it is sufficient to pick one of the few (say, five) words used to describe size ("tiny," "small," "medium," etc.). Indicating which of 5 words we have chosen takes 3 bits. However, in the modern computer representations of uncertainty, real numbers are used to represent this "fuzziness." A real number takes 10 times more memory to store, and therefore, processing a real number takes 10 times longer than it should. Therefore, for the computers to reach the ability of a human brain, Zadeh proposed to represent and process uncertainty in the computer by storing and processing the very words that humans use, without translating them into real numbers (he called this idea granularity). If we try to define operations with words, we run into the following problem: e.g., if we define "tiny" + "tiny" as "tiny," then we will have to make a counter-intuitive conclusion that the sum of any number of tiny objects is also tiny. If we define "tiny" + "tiny" as "small," we may be overestimating the size. To overcome this problem, we suggest to use nondeterministic (probabilistic) operations with words. For example, in the above case, "tiny" + "tiny" is, with some probability, equal to "tiny," and with some other probability, equal to "small." We also analyze the advantages and disadvantages of this approach: The main advantage is that we now have granularity and we can thus speed up processing uncertainty. The main disadvantage is that in some cases, when defining symmetric associative operations for the set of words, we must give up either symmetry, or associativity. Luckily, this necessity is not always happening: in some cases, we can define symmetric associative operations. © 1997 John Wiley & Sons, Inc.  
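The "tiny" + "tiny" problem above lends itself to a tiny sketch. The probabilities and the combination rule below are illustrative assumptions (the paper does not fix specific values); the point is only that a word-sum returns a distribution over neighboring granules rather than a single word.

```python
import random

# Ordered five-word granularity scale, as in the abstract's example.
WORDS = ["tiny", "small", "medium", "large", "huge"]

def add_words(a, b):
    # Nondeterministic "+": the result is a probability distribution.
    # Assumed rule: at least as large as the bigger operand (prob 0.7),
    # possibly one granule larger (prob 0.3). Values are illustrative.
    k = min(max(WORDS.index(a), WORDS.index(b)), len(WORDS) - 1)
    up = min(k + 1, len(WORDS) - 1)
    if up == k:
        return {WORDS[k]: 1.0}
    return {WORDS[k]: 0.7, WORDS[up]: 0.3}

def sample_sum(a, b, rng=random):
    # Draw one concrete word from the distribution.
    dist = add_words(a, b)
    words, probs = zip(*dist.items())
    return rng.choices(words, weights=probs)[0]

print(add_words("tiny", "tiny"))  # {'tiny': 0.7, 'small': 0.3}
```

With this rule, repeatedly summing "tiny" objects eventually escapes "tiny" with high probability, avoiding the counter-intuitive conclusion the abstract describes.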

15.
This paper describes a computerized model developed to support a general modeling approach for determining feasible repair/replacement strategies for a population of units. "A Dynamic Repair/Replace Population Model" (DRRPM) is a software prototype capable of analyzing 12 primary streams, or "groupings", of similar items, with 2 operating environments per stream, over a maximum planning horizon of 12 periods. Stream "groupings" are based on similar failure distributions, numbers of operational hours, similar operating environments, etc. A simple case study is included to demonstrate the usage of the computer model. The DRRPM management and cost summaries can assist in the determination of appropriate maintenance programs and can facilitate systematic resource allocation for a population of repairable or replaceable units.

16.
From CIMS to dynamic optimization of the supply chain   (Cited 13 times: 3 self-citations, 10 by others)
Computer Integrated Manufacturing Systems (CIMS), with their emphasis on the computer as the basic means and on manufacturing-centered integration, can no longer accommodate the new developments of the past two decades. After introducing the challenges brought by these changes, this paper discusses the basic concepts of supply chain management. It focuses on the composition and functions of a dynamically integrated supply chain system and compares such a system with CIMS. Finally, it surveys the current state and future directions of integrated supply chain systems.

17.
This paper proposes a novel method of performance assessment for the load control system of a thermal power unit. The load control system is the most important multivariable control system in the plant, so its performance must be monitored and evaluated. A covariance-based performance evaluation index system is defined, and performance evaluation rules are given. In MATLAB, a two-input, two-output model of the load control system is established, and its dynamic characteristics are analyzed under the BF and TF modes. Simulation data obtained after parameter retuning serve as the "benchmark data", and simulation data from different controllers are collected as "monitoring data". Since a thermal power plant operates in coordinated control mode most of the time, the principles and strategies of the two coordinated control schemes are analyzed and an engineering implementation scheme is given. Operating data from different time periods at two thermal power plants were acquired and preprocessed. "Benchmark data" were selected on the principle of minimum variation in the pressure parameter; two data segments were chosen as benchmark data, and performance assessment and analysis were carried out on data from the other periods. The results show the validity and reliability of the index-based evaluation method. In short, both the simulation data and the plant load-control data demonstrate the effectiveness of the method.
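The abstract does not give the covariance index formula, but the benchmark-vs-monitoring comparison it describes can be sketched in a minimal single-variable form (a deliberate simplification; the actual scheme is multivariable and covariance-based, and the data below are invented):

```python
# Hypothetical single-variable sketch of a benchmark-based performance
# index: ratio of benchmark output variance to monitoring output variance.
# An index near 1 means performance close to the benchmark; well below 1
# means the monitored loop has become more variable (degraded control).
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def performance_index(benchmark, monitoring):
    return variance(benchmark) / variance(monitoring)

# Illustrative control-error samples only.
bench = [0.1, -0.1, 0.05, -0.05, 0.0]
mon = [0.4, -0.3, 0.2, -0.4, 0.1]
print(performance_index(bench, mon))  # well below 1: degraded vs. benchmark
```

The paper's full scheme replaces these scalar variances with covariance matrices over both loop outputs.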

18.
An algorithm is developed for making magnetic field "reduction-to-the-pole" computations using two-dimensional Fourier series. A significant saving in computer time is achieved by using a "look-up table" to reduce the number of trigonometric functions to be evaluated. The algorithm is incorporated into an efficient FORTRAN program, RPØLE, for processing magnetic anomalies caused solely by induction in the earth's field.
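The look-up-table saving rests on a simple observation: on an N-point Fourier grid every required angle is an integer multiple of 2π/N, so sine need only be evaluated N times rather than once per (frequency, sample) pair. A minimal sketch of that idea (in Python rather than the paper's FORTRAN, with an assumed table size):

```python
import math

def build_trig_table(n):
    # Precompute sin(2*pi*k/n) for k = 0..n-1; cos comes free via a
    # quarter-period shift, so one table serves both functions.
    step = 2.0 * math.pi / n
    return [math.sin(k * step) for k in range(n)]

def sin_lookup(table, k):
    # sin(2*pi*k/N) for any integer k, using periodicity.
    return table[k % len(table)]

def cos_lookup(table, k):
    # cos(x) = sin(x + pi/2); pi/2 is a quarter of the table (n must be
    # divisible by 4 for this shift to be exact).
    n = len(table)
    return table[(k + n // 4) % n]

table = build_trig_table(256)  # illustrative table size
```

Indexing a list is far cheaper than calling `math.sin`, which is the whole point of the table in the inner Fourier-series loops.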

19.
20.
Lattice gauge theory is a technique for studying quantum field theory free of divergences. Until now, all Monte Carlo computer calculations have been performed on scalar machines. A technique has been developed for effectively vectorizing this class of Monte Carlo problems. The key to vectorizing is finding groups of points on the space-time lattice which are independent of each other; this requires a particular ordering of points along diagonals. A matrix-multiply technique is used which yields the whole result matrix in one pass. The CDC CYBER 205 is well suited to this class of problems using random "index-lists" (arising from the ordering algorithm and the use of random numbers), thanks to hardware "GATHER" and "SCATTER" operations that perform at streaming rate. A preliminary implementation of this method has executed 5 times faster than on the CDC 7600 system.
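The independence idea behind the diagonal ordering can be sketched in its simplest two-colour (checkerboard) form: on a 2-D lattice with nearest-neighbour coupling, all sites of one parity can be updated simultaneously because none of them neighbours another. This is a simplified illustration, not the paper's full ordering over gauge links in four dimensions.

```python
# Hypothetical sketch: group 2-D lattice sites by parity of (i + j),
# i.e. along diagonals. Sites within one group share no nearest-neighbour
# dependence, so each group forms one long vector (an "index-list")
# suitable for GATHER/SCATTER-style vectorized updates.
def diagonal_groups(nx, ny):
    groups = {}
    for i in range(nx):
        for j in range(ny):
            groups.setdefault((i + j) % 2, []).append((i, j))
    return groups

g = diagonal_groups(4, 4)  # two independent groups of 8 sites each
```

A vector machine then sweeps the lattice in two passes, one per group, instead of site by site.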
