Similar Documents
20 similar documents found (search time: 46 ms)
1.
TWIG (“Transportable Word Intension Generator”) is a system that allows a robot to learn compositional meanings for new words that are grounded in its sensory capabilities. The system is novel in its use of logical semantics to infer which entities in the environment are the referents (extensions) of unfamiliar words; its ability to learn the meanings of deictic (“I,” “this”) pronouns in a real sensory environment; its use of decision trees to implicitly contrast new word definitions with existing ones, thereby creating more complex definitions than if each word were treated as a separate learning problem; and its ability to use words learned in an unsupervised manner in complete grammatical sentences for production, comprehension, or referent inference. In an experiment with a physically embodied robot, TWIG learns grounded meanings for the words “I” and “you,” learns that “this” and “that” refer to objects of varying proximity, that “he” is someone talked about in the third person, and that “above” and “below” refer to height differences between objects. Follow-up experiments demonstrate the system's ability to learn different conjugations of “to be”; show that removing either the extension inference or implicit contrast components of the system results in worse definitions; and demonstrate how decision trees can be used to model shifts in meaning based on context in the case of color words.
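The implicit-contrast idea, defining each deictic word against the alternatives rather than in isolation, can be illustrated with a hand-written stand-in for the kind of decision tree TWIG learns (the features and the distance threshold here are hypothetical, not taken from the paper):

```python
def deictic_word(distance_m, is_speaker, is_addressee):
    """A hand-coded tree of the shape TWIG learns: each word's definition
    only makes sense in contrast to the words ruled out above it.
    Features and the 1.0 m threshold are illustrative assumptions."""
    if is_speaker:               # the referent is the one speaking -> "I"
        return "I"
    if is_addressee:             # the referent is being addressed -> "you"
        return "you"
    # neither speaker nor addressee: proximity separates "this" from "that"
    return "this" if distance_m < 1.0 else "that"
```

Treating the four words as one classification problem, rather than four separate ones, is what lets the nearby branches sharpen each other's boundaries.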

2.
Spectral-based image endmember extraction methods hinge on the ability to discriminate between pixels based on spectral characteristics alone. Endmembers with distinct spectral features (high spectral contrast) are easy to select, whereas those with minimal unique spectral information (low spectral contrast) are more problematic. Spectral contrast, however, is dependent on the endmember assemblage, such that as the assemblage changes so does the “relative” spectral contrast of each endmember to all other endmembers. It is then possible for an endmember to have low spectral contrast with respect to the full image, but have high spectral contrast within a subset of the image. The spatial-spectral endmember extraction tool (SSEE) works by analyzing a scene in parts (subsets), such that the spectral contrast of low contrast endmembers is increased, thus improving the potential for these endmembers to be selected. The SSEE method comprises three main steps: 1) application of singular value decomposition (SVD) to determine a set of basis vectors that describe most of the spectral variance for subsets of the image; 2) projection of the full image data set onto the locally defined basis vectors to determine a set of candidate endmember pixels; and 3) imposing spatial constraints for averaging spectrally similar endmembers, allowing for separation of endmembers that are spectrally similar, but spatially independent. The SSEE method is applied to two real hyperspectral data sets to demonstrate the effects of imposing spatial constraints on the selection of endmembers. The results show that the SSEE method is an effective approach to extracting image endmembers. Specific improvements include the extraction of physically meaningful, low contrast endmembers that occupy unique image regions.
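Steps 1 and 2 can be sketched as follows (the tile size, number of basis vectors, and extreme-pixel selection rule are illustrative assumptions, not the authors' exact settings):

```python
import numpy as np

def candidate_endmembers(cube, tile=16, n_vec=3, top_k=5):
    """SSEE steps 1-2, sketched: compute an SVD basis per spatial tile,
    project the FULL image onto each local basis, and keep the pixels
    at the extremes of each projection as endmember candidates."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)              # full image, (N, bands)
    candidates = set()
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            sub = cube[r:r + tile, c:c + tile].reshape(-1, bands)
            sub = sub - sub.mean(axis=0)          # center before SVD
            _, _, vt = np.linalg.svd(sub, full_matrices=False)
            basis = vt[:n_vec]                    # most of the local variance
            proj = pixels @ basis.T               # project all pixels
            for j in range(n_vec):                # extremes = high local contrast
                order = np.argsort(proj[:, j])
                candidates.update(int(i) for i in order[:top_k])
                candidates.update(int(i) for i in order[-top_k:])
    return sorted(candidates)
```

Step 3, the spatial averaging of spectrally similar candidates, would then operate on this candidate list.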

3.
Dun Liu, Tianrui Li. Information Sciences, 2011, 181(17): 3709-3722
In dealing with risk in real decision problems, decision-theoretic rough sets with loss functions aim to obtain optimal decisions by minimizing the overall risk with Bayesian decision procedures. Two parameters generated by the loss functions divide the universe into three regions, corresponding to the decisions of acceptance, deferment, and rejection. In this paper, we discuss the semantics of loss functions, and use the differences between losses in place of the actual losses to construct a new “four-level” approach for choosing probabilistic rule criteria. Ten types of probabilistic rough set models can be generated by the “four-level” approach, forming two groups of models: two-way probabilistic decision models and three-way probabilistic decision models. A reasonable decision with these criteria is demonstrated by an illustration of oil investment.
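The two region-dividing parameters the abstract refers to can be written down directly in the standard decision-theoretic rough set formulation (a sketch using the conventional six-value loss notation, not the paper's specific “four-level” construction):

```python
def dtrs_thresholds(l_pp, l_bp, l_np, l_pn, l_bn, l_nn):
    """Thresholds (alpha, beta) of the three-way decision, derived from a
    six-value loss function by Bayesian risk minimization.  l_xy is the
    loss of taking action x (P=accept, B=defer, N=reject) when the object
    is (y=p) or is not (y=n) in the concept.  Standard DTRS form."""
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def three_way_decide(prob, alpha, beta):
    """Map a conditional probability to the three regions."""
    if prob >= alpha:
        return "accept"
    if prob <= beta:
        return "reject"
    return "defer"
```

With reasonable losses (misclassification costlier than deferment), alpha > beta and the deferment region is non-empty.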

4.
This paper discusses the steps taken to set up a digital logic course problem through a problem-based learning (PBL) constructivist approach. PBL is the learning that results from the process of working toward the understanding and resolution of a problem. The purpose of this study was to develop and implement problem-based learning in a digital logic course in a senior vocational industrial high school. Data collection included content analysis and a questionnaire survey. Content analysis was used to evaluate the students’ discussion messages, quality of dialogue, and the level of problem-solving activities. A survey was then administered to examine the students’ learning attitudes and perceptions toward this platform as a possible tool for PBL learning. The researchers found that the “Peer-responses” category contained the most messages; that message content focused on “General explanation” and “Reaction”; that the problem-solving levels of all groups were similar; and that the “Interaction” satisfaction index was the highest in the PBL activity. Finally, some research suggestions are also proposed.

5.
Elections are a central model in a variety of areas. This paper studies parameterized computational complexity of five control problems in the Maximin election. We obtain the following results: constructive control by adding candidates is W[2]-hard with respect to the parameter “number of added candidates”; both constructive and destructive control by adding/deleting voters are W[1]-hard with respect to the parameter “number of added/deleted voters”.

6.
7.
“Urban Sprawl” is a growing concern of citizens, environmental organizations, and governments. Negative impacts often attributed to urban sprawl are traffic congestion, loss of open space, and increased pollutant runoff into natural waterways. Definitions of “Urban Sprawl” range from local patterns of land use and development to aggregate measures of per capita land consumption for given contiguous urban areas (UA). This research creates a measure of per capita land use consumption as an aggregate index for the spatially contiguous urban areas of the conterminous United States with population of 50,000 or greater. Nighttime satellite imagery obtained by the Defense Meteorological Satellite Program's Operational Linescan System (DMSP OLS) is used as a proxy measure of urban extent. The corresponding population of these urban areas is derived from a grid of the block group level data from the 1990 U.S. Census. These numbers are used to develop a regression equation between Ln(Urban Area) and Ln(Urban Population). The ‘scale-adjustment’ mentioned in the title characterizes the “Urban Sprawl” of each of the urban areas by how far above or below they are on the “Sprawl Line” determined by this regression. This “Sprawl Line” allows for a fairer comparison of “Urban Sprawl” between larger and smaller metropolitan areas because a simple measure of per capita land consumption or population density does not account for the natural increase in aggregate population density that occurs as cities grow in population. Cities with more “Urban Sprawl” by this measure tended to be inland and Midwestern cities such as Minneapolis-St. Paul, Atlanta, Dallas-Ft. Worth, St. Louis, and Kansas City. Surprisingly, west coast cities including Los Angeles had some of the lowest levels of “Urban Sprawl” by this measure.
There were many low light levels seen in the nighttime imagery around these major urban areas that were not included in either of the two definitions of urban extent used in this study. These areas may represent a growing commuter-shed of urban workers who do not live in the urban core but nonetheless contribute to many of the impacts typically attributed to “Urban Sprawl”. “Urban Sprawl” is difficult to define precisely partly because public perception of sprawl is likely derived from local land use planning decisions, spatio-demographic change in growing urban areas, and changing values and social mores resulting from differential rates of international migration to the urban areas of the United States. Nonetheless, the aggregate measures derived here are somewhat different from similar previously used measures in that they are ‘scale-adjusted’; also, the spatial patterns of “Urban Sprawl” shown here shed some insight and raise interesting questions about how the dynamics of “Urban Sprawl” are changing.
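The scale adjustment described above amounts to regressing ln(Area) on ln(Population) across cities and reading each city's sprawl from its residual; a minimal sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def sprawl_index(areas_km2, populations):
    """Fit the 'Sprawl Line' ln(Area) = a + b * ln(Pop) across all cities.
    Each city's residual from this line is its scale-adjusted sprawl:
    positive means more land per capita than expected for a city its size."""
    x = np.log(np.asarray(populations, dtype=float))
    y = np.log(np.asarray(areas_km2, dtype=float))
    b, a = np.polyfit(x, y, 1)       # slope first, then intercept
    return y - (a + b * x)           # residuals above/below the line
```

Because the slope b is fitted rather than fixed at 1, the index absorbs the tendency of aggregate density to rise with city size, which raw per capita land consumption does not.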

8.
Estimation of aerosol loadings is of great importance to the studies on global climate changes. The current Moderate-Resolution Imaging Spectroradiometer (MODIS) aerosol estimation algorithm over land is based on the “dark-object” approach, which works only over densely vegetated (“dark”) surfaces. In this study, we develop a new aerosol estimation algorithm that uses the temporal signatures from a sequence of MODIS imagery over land surfaces, particularly “bright” surfaces. The estimated aerosol optical depth is validated by Aerosol Robotic Network (AERONET) measurements. Case studies indicate that this algorithm can retrieve aerosol optical depths reasonably well from the winter MODIS imagery at seven sites: four sites in the greater Washington, DC area, USA; Beijing City, China; Banizoumbou, Niger, Africa; and Bratts Lake, Canada. The MODIS aerosol estimation algorithm over land (MOD04), however, does not perform well over these non-vegetated surfaces. This new algorithm has the potential to be used for other satellite images that have similar temporal resolutions.

9.
MGRS: A multi-granulation rough set
The original rough set model was developed by Pawlak and is mainly concerned with the approximation of sets described by a single binary relation on the universe. In the view of granular computing, the classical rough set theory is established through a single granulation. This paper extends Pawlak’s rough set model to a multi-granulation rough set model (MGRS), where the set approximations are defined using multiple equivalence relations on the universe. A number of important properties of MGRS are obtained. It is shown that some of the properties of Pawlak’s rough set theory are special instances of those of MGRS. Moreover, several important measures, such as the accuracy measure α, the quality of approximation γ, and the precision of approximation π, are presented and re-interpreted in terms of a classic set-based measure, the Marczewski-Steinhaus metric, and the inclusion degree measure. A concept of approximation reduct is also introduced to describe the smallest attribute subset that preserves the lower and upper approximations of all decision classes in MGRS. Finally, we discuss how to extract decision rules using MGRS. Unlike the decision rules (“AND” rules) from Pawlak’s rough set model, the decision rules in MGRS take the form of “OR” rules. Several pivotal algorithms are also designed, which are helpful for applying this theory to practical issues. The multi-granulation rough set model provides an effective approach for problem solving in the context of multiple granulations.
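The multi-granulation approximations can be sketched in a few lines; the sketch below follows the commonly cited (optimistic) MGRS definitions, with partitions represented as lists of blocks (names are ours):

```python
def block(partition, x):
    """Equivalence class of x under a partition given as a list of frozensets."""
    for b in partition:
        if x in b:
            return b
    raise KeyError(x)

def mgrs_lower(universe, partitions, target):
    """MGRS lower approximation: x belongs if its class under AT LEAST ONE
    granulation is contained in the target set (hence 'OR' rules)."""
    target = set(target)
    return {x for x in universe
            if any(block(p, x) <= target for p in partitions)}

def mgrs_upper(universe, partitions, target):
    """Dual upper approximation: x belongs if its class under EVERY
    granulation meets the target set."""
    target = set(target)
    return {x for x in universe
            if all(block(p, x) & target for p in partitions)}
```

With a single partition both functions reduce to Pawlak's lower and upper approximations, which is the containment the abstract notes.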

10.
Infrared remotely sensed data can be used to estimate heat flux and thermal features of active volcanoes. The model proposed by Crisp and Baloga [Crisp, J., Baloga, S., 1990. A model for lava flows with two thermal components. Journal of Geophysical Research, 95, 1255-1270.] for active lava flows considers the thermal flux a function of the fractional area of two thermally distinct radiant surfaces. The larger surface area corresponds to the cooler crust of the flow, the smaller one to fractures in the crust. In this model, the crust temperature Tc, the temperature of the cracks Th, and the fractional area of the hottest component fh are the three unknowns to be determined. The simultaneous solution of the Planck equation (“dual-band” technique) for two distinct shortwave infrared (SWIR) bands allows any two of the parameters Tc, Th, fh to be estimated if the third is assumed [Dozier, J., 1981. A method for satellite identification of surface temperature fields of subpixel resolution. Remote Sensing of Environment, 11, 221-229.] The airborne sensor MIVIS was flown over Mount Etna during the July-August 2001 eruption. This hyperspectral imaging spectrometer offers 72 bands in the SWIR range and 10 bands in the thermal infrared (TIR) region of the spectrum, which can be used to solve the dual-band system without any assumptions. Therefore, we can combine three spectral MIVIS bands to obtain simultaneous solutions for the three unknowns. Here, the procedure for solving such a system is presented. It is then demonstrated that a TIR channel is required to better pinpoint solutions to the two-component model. Finally, the spatial and statistical characteristics of the resultant MIVIS-derived temperature and flux distributions are introduced, and statistics for each hot spot are investigated.
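In symbols, the per-band radiance balance that the dual-band technique solves can be written as follows (standard two-component mixture form, using the abstract's Tc, Th, fh, with B the Planck radiance):

```latex
% Pixel-integrated radiance in band i as a two-component mixture.
% With three bands (two SWIR plus one TIR from MIVIS), the system of
% three equations determines the three unknowns T_c, T_h, f_h without
% assuming any of them a priori.
L(\lambda_i) = f_h \, B(\lambda_i, T_h) + (1 - f_h) \, B(\lambda_i, T_c),
\qquad i = 1, 2, 3
```

With only two bands, one of the three unknowns must be fixed, which is the Dozier-style assumption the three-band MIVIS solution avoids.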

11.
The two-dimensional Ising model in the geometry of a long stripe can be regarded as a model system for the study of nanopores. As a quasi-one-dimensional system, it also exhibits a rather interesting “phase behavior”: At low temperatures the stripe is either filled with “liquid” or “gas” and “densities” are similar to those in the bulk. When we approach a “pseudo-critical point” (below the critical point of the bulk) at which the correlation length becomes comparable to the length of the stripe, several interfaces emerge and the system contains multiple “liquid” and “gas” domains. The transition depends on the size of the stripe and occurs at lower temperatures for larger stripes. Our results are corroborated by simulations of the three-dimensional Asakura–Oosawa model in cylindrical geometry, which displays qualitatively similar behavior. Thus our simulations explain the physical basis for the occurrence of “hysteresis critical points” in corresponding experiments.

12.
The rapid development of information and communication technology and the popularization of the Internet have given a boost to digitization technologies. Since 2001, the National Science Council (NSC) of Taiwan has invested a large amount of funding in the National Digital Archives Program (NDAP) to develop digital content. Some studies have indicated that most respondents had no confidence in particular digital archive websites. Thus, with the Technology Acceptance Model (TAM) as a theoretical basis, the focus of the present study was to identify the factors influencing usage. Extension of the roles of perceived playfulness and interface design was also explored to identify the reasons that digital archives might not be accepted by some users. The present study used a random sampling method to distribute questionnaires to digital archive users via e-mail. The Structural Equation Modeling (SEM) method was used to verify the appropriateness of the study model and whether the hypotheses were confirmed. Study results indicated that “interface design” is an important factor that influences people to use the digital archives, and that it is separate from the “human factor” and the “human–computer interface” (HCI). Moreover, the results showed that HCI had a significant impact on “perceived ease of use” and on “usage intentions.” However, the human factor interface showed a significant impact only on “perceived ease of use.” With respect to the hypotheses regarding “usage intentions,” the “perceived usefulness,” “perceived ease of use,” “attitude,” and “perceived playfulness” were not related to “usage intentions.” Therefore, it is necessary to consider the quality of interface design in the development of digital archives in order to promote usage.

13.
Practical training is what brings imagination and creativity to fruition, and it relies significantly on the relevant technical skills. The current study therefore emphasized strengthening the learning of technical skills with emerging innovations in technology, while also studying the effects of employing such technologies. For the students who participated in the study, technical skills were cultivated in the five dimensions of knowledge, comprehension, simulation, application, and creativity, in accordance with the set teaching objectives and the taxonomy of student learning outcomes, while the virtual reality learning environment (VRLE) was developed to meet different goals as the various technical skills were examined. For the nature of technology, operation of machines, selection of process parameters, and process planning in technical skills, the VRLE was designed with the six modules of “learning resource”, “digital content”, “collaborative learning”, “formative evaluation”, “simulation of manufacturing process”, and “practical exercise” to assist students in developing their technical skills on a specific, gradual basis. After one semester of developing these technical skills, the students reported finding the VRLE to be a significantly effective method for the three dimensions of “operation of machines”, “selection of process parameters”, and “process planning”, though less so for the dimension of “nature of technology”. Among the six modules, “simulation of manufacturing process” and “practical exercise” were the two most preferred by students for the three dimensions considered.

14.
Asian dust storm outbreaks significantly influence air quality, weather, and climate. Therefore, it is desirable to have qualitative and quantitative information on the time, location, and coverage of these outbreaks at high spatial and temporal resolution. The imager on board the Indian meteorological geostationary satellite INSAT-3D observes Asia at a temporal resolution of 30 min and a spatial resolution of 1, 4, 8, and 4 km in the visible, middle infrared (MIR), water vapour (WV), and thermal infrared (TIR) bands, respectively. In this article, an algorithm is described for detecting desert dust storms from INSAT-3D imager data. The algorithm described here is a combination of various pre-existing methods such as infrared split-window, MIR and TIR brightness temperature difference, and visible to MIR reflectance ratio, which are based on the fact that dust exhibits features of spectral dependence and contrast over the visible, MIR, and TIR spectrum that are different from clouds, surface, and clear-sky atmosphere. Using the Atmospheric Infrared Sounder (Aqua/AIRS) dust score as proxy, INSAT-3D dust storm products were tested under different scenarios such as dust storms and dust transport in Asia. TIR observations from the geostationary platform of INSAT-3D allow computation of the infrared difference dust index (IDDI), which gives a quantitative measure of dust loading relative to clear atmosphere. Moreover, due to the high temporal resolution (30 min) of INSAT-3D observations, INSAT-3D-derived dust products allow more precise monitoring of dust transportation as compared with dust products derived from polar satellite observations.
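Two of the brightness-temperature tests mentioned above can be combined into a toy dust flag (the threshold values here are purely illustrative, not the algorithm's tuned settings):

```python
import numpy as np

def dust_mask(bt_tir1, bt_tir2, bt_mir,
              split_thresh=-0.5, mir_tir_thresh=10.0):
    """Toy combination of two dust tests from the literature:
    - infrared split-window difference (BT_11um - BT_12um), which tends
      to be negative over lofted mineral dust;
    - MIR minus TIR brightness temperature difference, which tends to be
      large over dust in daytime.
    Thresholds are illustrative assumptions only."""
    split = bt_tir1 - bt_tir2          # split-window test
    mir_tir = bt_mir - bt_tir1         # MIR-TIR contrast test
    return (split < split_thresh) & (mir_tir > mir_tir_thresh)
```

A real implementation would add the visible/MIR reflectance ratio test and cloud screening before computing the IDDI.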

15.
In this paper, two significant weaknesses of locally linear embedding (LLE) applied to computer vision are addressed: “intrinsic dimension” and “eigenvector meanings”. “Topological embedding” and “multi-resolution nonlinearity capture” are introduced based on mathematical analysis of topological manifolds and LLE. The manifold topological analysis (MTA) method is described and is based on “topological embedding”. MTA is a more robust method to determine the “intrinsic dimension” of a manifold with typical topology, which is important for tracking and perception understanding. The manifold multi-resolution analysis (MMA) method is based on “multi-resolution nonlinearity capture”. MMA defines LLE eigenvectors as features for pattern recognition and dimension reduction. Both MTA and MMA are proved mathematically, and several examples are provided. Applications in 3D object recognition and 3D object viewpoint space partitioning are also described.

16.
This study investigated the effects of pushing a wheelchair on the energy cost of walking (Cw; defined as the ratio of the steady-state oxygen consumption to the walking speed) and economical speed (ES) on the level and ±5% gradients. Eight pairs were formed from twelve young men to minimize variation in body weight between pushing and assisted participants. The State Trait Anxiety Inventory (STAI) test was conducted to evaluate wheelchair occupants' anxiety before and after each trial. The Cw values were significantly higher when pushing a wheelchair on the uphill gradient at more than 45 m/min. ES was significantly lower when pushing a wheelchair on the level (−8.5%) and uphill gradient (−9.1%), but not on the downhill gradient (−0.3%). Individual ES was also estimated using the concept of “Froude number”, and “estimated” ES was significantly correlated with “measured” ES even when pushing a wheelchair on the downhill gradient. The STAI score was not significantly increased except at 105 m/min, regardless of gradient. These results indicated that the fastest walking speed without an enhancement in wheelchair occupants' anxiety corresponds to ES when pushing a wheelchair with a seated occupant on all gradients, at least in young fit men.
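The Froude-number estimate mentioned above follows from the standard dimensionless definition Fr = v² / (g·L), with L the leg length; inverting it gives a speed estimate (the Fr value and leg length below are illustrative, not the study's measurements):

```python
import math

def speed_from_froude(froude, leg_length_m, g=9.81):
    """Walking speed (m/s) corresponding to a Froude number
    Fr = v^2 / (g * L), where L is leg length in metres.
    Used to estimate an individual's economical speed from body size."""
    return math.sqrt(froude * g * leg_length_m)
```

For example, Fr = 0.25 (often cited as near the walking optimum) and a 0.9 m leg length give roughly 1.49 m/s, about 89 m/min.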

17.
This study investigated the effects of positive interdependence and group processing on student achievement and attitude in online learning. Students in three university courses received initial instruction about teamwork skills and cooperative learning and were randomly assigned to one of three treatment groups in each course. The “positive interdependence” and the “group processing” groups received subsequent associated skills training. The “no structure” control group received no additional training. Following the treatment, the “positive interdependence” groups had significantly higher achievement than the “group processing” or the “no structure” groups. There was no significant difference among any of the three groups on student attitude.

18.
Our vision for the future of composition focuses on the “tube” and the culture inspired by online video sharing. Understanding composition in 2020 requires further theorizing about the participatory practices occurring in online video culture. Based on practices found on the platform YouTube, we turn to the term “tubing” to explain phenomena taking place there, and we put forward the concept of “participatory pedagogy” that we see emerging in 21st century classrooms. The ubiquitous and historically loaded “tube” (noun) and its YouTube-specific counterpart “tubing” (verb), explain many of the shifts taking place as acts of writing expand to include participation in online video sharing. Other scholars have forwarded the notion of “postpedagogy” (Vitanza, 1991; Davis, 2000; Arroyo, 2003, 2005; Rickert, 2007), which places a high value on invention, encourages the playful, yet serious linking of disparate historical figures, and opens up new pathways that we see as working in tandem with what George Siemens (2005) called a “pedagogy of participation,” an offshoot of what Henry Jenkins named “participatory culture” (2009). Using tubing as a guiding metaphor, we develop our version of “participatory pedagogy” for 2020 by focusing on the propagation of Internet memes and the inventional possibilities found in the everyday practices of video culture, which create an historical archive, an untapped repository of cultural patterns, and a light yet ruthlessly public demand for participation.

19.
The digital content industry is flourishing as a result of the rapid development of technology and the widespread use of computer networks. As has been reported, the market size of the global e-learning (i.e., distance education and telelearning) will reach USD 49.6 billion in 2014. However, to retain and/or increase the market share associated with e-learning, it is important to maintain or increase service quality in this sector. This research was intended to develop an analytical model for enhancing the service quality of e-learning using a hybrid approach from the perspective of customers. The evaluation methodology integrates the three methods: rough set theory (RST), quality function deployment (QFD), and grey relational analysis (GRA). First, important criteria affecting service quality (referred to as customer requirements (CRs)) and relevant technical information (referred to as technical requirements (TRs)) for e-learning are compiled from an extensive literature review. Using the data regarding customer satisfaction collected from a questionnaire survey, RST is then used to reduce the number of attributes considered and to determine the CRs. Furthermore, in consultation with domain experts, QFD is used together with GRA to analyze the interrelationships between CRs (which represent the voice of customer (VOC)) and TRs (which represent the voice of the engineer (VOE)) and to create an order of priority for the TRs given the CRs based on objective weighting using the entropy value. An illustrative example is provided—an empirical analysis of the students who participated in the e-learning program at a particular university. The results reveal that of the fourteen TRs, “Curriculum development” has the greatest effect on e-learning service quality, followed by “Evaluation”, “Guidance and tracing”, “Instructional design”, and “Teaching materials”. Both the CRs and the TRs may vary depending on the individual organization. 
Nevertheless, the proposed model can be a useful point of reference for e-learning service providers, helping them to identify the TRs that they can use to enhance service quality and to target vital CRs.
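The GRA step of the hybrid model can be sketched as follows (min-max normalization and the distinguishing coefficient ρ = 0.5 are conventional GRA choices, not necessarily the paper's; names are ours):

```python
import numpy as np

def grey_relational_grades(matrix, rho=0.5):
    """Grey relational grade of each alternative (row) against the ideal
    reference sequence (the column-wise maxima after normalization).
    Assumes benefit-type criteria that each take more than one value."""
    x = np.asarray(matrix, dtype=float)
    x = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))  # min-max per column
    ref = x.max(axis=0)                                        # ideal sequence
    delta = np.abs(ref - x)                                    # deviation sequences
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean(axis=1)      # grade = mean relational coefficient per row
```

In the paper's pipeline, these grades (computed over the QFD relationships) would feed the entropy-weighted prioritization of the TRs.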

20.
Scholars have begun naming and defining terms that describe the multifaceted kinds of composing practices occurring in their classrooms and scholarship. This paper analyzes the terms “multimedia” and “multimodal,” examining how each term has been defined and presenting examples of documents, surveys, web sites and others to show when and how each term is used in both academic and non-academic/industry contexts. This paper shows that rather than the use of these terms being driven by any difference in their definitions, their use is more contingent upon the context and the audience to whom a particular discussion is being directed. While “multimedia” is used more frequently in public/industry contexts, “multimodal” is preferred in the field of composition and rhetoric. This preference for terms can be best explained by understanding the differences in how texts are valued and evaluated in these contexts. “Multimodal” is a term valued by instructors because of its emphasis on design and process, whereas “multimedia” is valued in the public sphere because of its emphasis on the production of a deliverable text. Ultimately, instructors need to continue using both terms in their teaching and scholarship because although “multimodal” is a term that is more theoretically accurate to describe the cognitive and socially situated choices students are making in their compositions, “multimedia” works as a gateway term for instructors and scholars to interface with those outside of academia in familiar and important ways.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号