Similar Documents
 Found 20 similar documents (search time: 15 ms)
1.
The design and selection of 3D modeled hand gestures for human–computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human–computer input.
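As a sketch of the kind of model this abstract describes, a high/low discomfort rating can be regressed on binary posture features. The feature names, data, and effect sizes below are synthetic illustrations, not the study's data, and the fit uses plain gradient ascent on a binary logistic model rather than the authors' nominal (multi-category) regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic posture features (illustrative only):
# columns = [wrist_flexed, discordant_adjacent_fingers, extended_fingers]
n = 500
X = rng.integers(0, 2, size=(n, 3)).astype(float)

# Assumed ground truth: each risk factor raises the log-odds of high discomfort.
logits = -1.5 + X @ np.array([1.2, 0.9, 0.8])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

def fit_logistic(X, y, lr=0.1, steps=3000):
    """Fit a binary logistic model by plain gradient ascent."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted P(high discomfort)
        w += lr * Xb.T @ (y - p) / len(y)      # average log-likelihood gradient
    return w

w = fit_logistic(X, y)
print(np.round(w, 2))  # positive feature weights = postures linked to discomfort
```

Positive fitted weights on the posture columns correspond to the paper's finding that those postures are associated with higher discomfort.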

2.
Intelligent Service Robotics - This paper introduces an approach to automatic domain modeling for human–robot interaction. The proposed approach is symbolic and intended for semantically...

3.
4.
Firms are increasingly involving users in new product development (NPD). Their product users frequently provide solution information, such as new product ideas. However, these users are often considered a homogeneous group of ordinary users; their individual abilities and the specific input they provide for NPD are not yet well understood. The goal of this paper is to determine whether different types of users are differently predisposed to produce ideas. We derive hypotheses regarding the possible outcome of involving different user types in idea generation tasks from the current literature on customer integration into NPD. In a quasi-experimental setting, we test our assumptions on 93 users, who generate ideas in a smart home context. The results indicate that users' contribution depends on their specific domain knowledge, which is broadly understood as knowledge of a specific area that influences ideation towards solutions in this domain. We distinguish between four types of users: those with high trend awareness, high technical skills, high technical innovativeness, and high ethical reflectiveness. We find that users with high technical skills are more likely to produce ideas that are technically feasible. Trend-aware and technically innovative users produce ideas of greater originality. Ethically reflective users tend to come up with ideas that will have a positive impact on society.

5.
We couple pseudo-particle modeling (PPM, Ge and Li in Chem Eng Sci 58(8):1565–1585, 2003), a variant of hard-particle molecular dynamics, with standard soft-particle molecular dynamics (MD) to study an idealized gas–liquid flow in nano-channels. The coupling helps to keep sharp contrast between gas and liquid behaviors and the simulations conducted provide a reference frame for exploring more complex and realistic gas–liquid nano-flows. The qualitative nature and general flow patterns of the flow under such extreme conditions are found to be consistent with its macro-scale counterpart.
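The soft-particle MD side of such a coupled scheme rests on a standard pair-potential integrator. Below is a minimal velocity-Verlet sketch for two Lennard-Jones particles in reduced units; it illustrates the generic method only and is not the paper's coupled PPM–MD code:

```python
import numpy as np

def lj_force(r_vec, eps=1.0, sigma=1.0):
    """Lennard-Jones force on the particle at +r_vec from its partner."""
    r = np.linalg.norm(r_vec)
    mag = 24.0 * eps * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
    return mag * r_vec / r

def lj_energy(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential."""
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def velocity_verlet(x, v, dt=1e-3, steps=5000, m=1.0):
    """Advance a two-particle LJ system with the velocity-Verlet scheme."""
    f = lj_force(x[0] - x[1])
    for _ in range(steps):
        a = np.array([f, -f]) / m           # equal and opposite forces
        x = x + v * dt + 0.5 * a * dt ** 2  # position update
        f = lj_force(x[0] - x[1])
        a_new = np.array([f, -f]) / m
        v = v + 0.5 * (a + a_new) * dt      # velocity update
    return x, v

# Two particles released from rest at separation 1.5 sigma:
x0 = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
v0 = np.zeros((2, 3))
x, v = velocity_verlet(x0, v0)

# Total energy should stay near its initial value lj_energy(1.5);
# near-equality of the two printed numbers checks energy conservation.
kinetic = 0.5 * np.sum(v ** 2)
potential = lj_energy(np.linalg.norm(x[0] - x[1]))
print(kinetic + potential, lj_energy(1.5))
```

The symplectic velocity-Verlet update is the usual choice here because it conserves energy well over long trajectories, which matters when a soft-particle region must stay consistent with a hard-particle (PPM) region.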

6.
Digital Anthropological Resources for Teaching (DART) is a major project examining ways in which the use of online learning activities and repositories can enhance the teaching of anthropology and, by extension, other disciplines. This paper reports on one strand of DART activity, the development of customisable learning activities that can be repurposed for use in multiple contexts. Three examples of these activities are described and, based on their use and reuse, some key lessons for the learning technology community are identified. In particular, it is argued that repurposing is a route to successful reuse, and that engaging the teacher in a participative design process is an essential part of the repurposing process.

7.
Transthyretin (TTR) is a protein whose aggregation and deposition cause amyloid diseases in humans. Amyloid fibril formation is prevented by the binding of thyroxine (T4) or its analogs to TTR. MD simulation studies of several solvated X-ray structures of apo and holo TTR have indicated the role of a conserved water molecule and its interaction with the T4-binding residues Ser117 and Thr119. The geometrical and electronic consequences of those interactions have been exploited to design a series of thyroxine analogs (Mod1–4) by modifying the 5′ iodine atom, the 3′ iodine atom, or both. The binding energies of the designed ligands were calculated by docking the molecules into the tetrameric structure of the protein. Theoretically investigated pharmacological parameters, together with the binding energy data, indicate that 3′,5′-diacetyl-3,5-dichloro-l-thyronine (Mod4) is a promising inhibitor for TTR-related amyloid diseases.

8.
9.
10.
Background: Hypoxia-inducible factor 2 alpha (HIF2α), prolyl hydroxylase domain protein 2 (PHD2), and the von Hippel-Lindau tumor suppressor protein (pVHL) are three principal proteins in the oxygen-sensing pathway. Under normoxic conditions, a conserved proline in HIF2α is hydroxylated by PHD2 in an oxygen-dependent manner, and pVHL then binds and promotes the degradation of HIF2α. However, the crystal structure of the HIF2α-pVHL complex has not yet been determined, which has limited research on the interaction between HIF and pVHL. Here, we constructed a structural model of a 23-residue HIF2α peptide (528–550)-pVHL-ElonginB-ElonginC complex by using homology modeling and molecular dynamics simulations. We also applied these methods to HIF2α mutants (HYP531PRO, F540L, A530V, A530T, and G537R) to reveal structural defects that explain how these mutations weaken the interaction with pVHL.
Methods: Homology modeling and molecular dynamics simulations were used to construct a three-dimensional (3D) structural model of the HIF2α-pVHL complex. MolProbity, an active validation tool, was then used to assess the reliability of the model. Molecular mechanics energies combined with generalized Born and surface area continuum solvation (MM-GBSA) and the solvated interaction energy (SIE) method were used to calculate the binding free energy between HIF2α and pVHL, and the stability of the simulated system was evaluated by root mean square deviation (RMSD) analysis. We also determined the secondary structure of the system using the Definition of Secondary Structure of Proteins (DSSP) algorithm. Finally, we investigated the structural significance of specific point mutations known to have clinical implications.
Results: We established a reliable structural model of the HIF2α-pVHL complex, which is similar to the crystal structure of HIF1α in 1LQB. Furthermore, we compared the structural model of the HIF2α-pVHL complex with the HIF2α (HYP531P, F540L, A530V, A530T, and G537R)-pVHL mutants on the basis of RMSD, DSSP, binding free energy, and hydrogen bonding. The data indicate that the stability of the structural model of the HIF2α-pVHL complex is higher than that of the mutants, consistent with clinical observations.
Conclusions: The structural model of the HIF2α-pVHL complex presented in this study enhances understanding of how HIF2α is captured by pVHL. Moreover, the important contact amino acids that we identified may be useful in the development of drugs to treat HIF2α-related diseases.
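The RMSD measure used in such stability analyses has a simple closed form. A minimal sketch with synthetic coordinates, assuming the two structures are already superposed (production MD-analysis tools apply a least-squares Kabsch fit first):

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two (N, 3) coordinate sets.
    Assumes the structures are already superposed."""
    diff = coords_a - coords_b
    return np.sqrt((diff ** 2).sum() / len(coords_a))

# Toy example: four atoms, each displaced 2 Angstroms along z.
ref = np.zeros((4, 3))
moved = ref + np.array([0.0, 0.0, 2.0])
print(rmsd(ref, moved))  # → 2.0
```

A trajectory-level stability comparison like the one in the abstract reduces to evaluating this quantity frame by frame against a reference structure and comparing the resulting time series between wild type and mutants.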

11.
Cancer is a complex disease resulting from the uncontrolled proliferation of cell signaling events. Protein kinases have been identified as central molecules that participate overwhelmingly in oncogenic events, and have thus become key targets for anticancer drugs. A majority of studies converge on the idea that the ligand-binding pockets of kinases hold clues to the inhibitory abilities and cross-reacting tendencies of inhibitor drugs. Even though these ideas are critical for drug discovery, validating them experimentally is not only difficult but in some cases infeasible. To overcome these limitations and to test these ideas at the molecular level, we present the results of receptor-focused in-silico docking of nine marketed drugs to 19 different wild-type and mutated kinases chosen from a wide range of families. This investigation highlights the need for relevant models to explain the correct inhibition trends, and the results are used to make predictions that may influence future experiments. Our simulation studies correctly predict the primary targets of each drug in the majority of cases, and our results agree with existing findings. Our study shows that the conformations a given receptor acquires during kinase activation, and their micro-environment, define its ligand partners. Type II drugs display high compatibility and selectivity for DFG-out kinase conformations. Type I drugs, on the other hand, are less selective and show binding preferences for both the open and closed forms of selected kinases. Using this receptor-focused approach, it is possible to capture the observed fold change in binding affinities between the wild-type and disease-centric mutations in ABL kinase for imatinib and the second-generation ABL drugs. The effects of mutation are also investigated for two other systems, EGFR and B-Raf. Finally, by including pathway information in the design, it is possible to model kinase inhibitors with potentially fewer side effects.

12.
13.
《Computers & Security》1986,5(2):122-127
This article demonstrates how to implement an ongoing security prevention system, starting from the design phase of an information system and continuing through all subsequent project phases over the system's whole life cycle.

14.
15.
The aim of this paper is to show the value of fitting features with an α-stable distribution when classifying imperfect data. Supervised pattern recognition is based here on the theory of continuous belief functions, which provides a way to account for the imprecision and uncertainty of data. The feature distributions are assumed to be unimodal and are estimated by a single Gaussian model and a single α-stable model. Experimental results are first obtained on synthetic data by combining two one-dimensional features and by considering a vector of two features. Mass functions are calculated from plausibility functions using the generalized Bayes theorem. The same study is then applied to the automatic classification of three types of sea floor (rock, silt, and sand) from features acquired by a mono-beam echo-sounder. We evaluate the quality of the α-stable and Gaussian models through qualitative results, a Kolmogorov–Smirnov test (K–S test), and quantitative classification rates. The performance of the belief classifier is compared with a Bayesian approach.
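The Kolmogorov–Smirnov goodness-of-fit step can be sketched with SciPy. The example below fits only the Gaussian model to a synthetic feature; `scipy.stats.levy_stable` covers the α-stable case in the same fashion, though fitting it is considerably slower. The data here are synthetic, not the sonar features from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
feature = rng.normal(loc=5.0, scale=2.0, size=1000)  # synthetic feature values

# Fit the Gaussian model, then test goodness of fit with a K-S test
# against the fitted distribution.
mu, sigma = stats.norm.fit(feature)
ks_stat, p_value = stats.kstest(feature, "norm", args=(mu, sigma))
print(f"mu={mu:.2f} sigma={sigma:.2f} KS statistic={ks_stat:.3f}")
```

A small KS statistic (and large p-value) indicates the fitted model describes the empirical distribution well; comparing the statistic for the Gaussian fit against the α-stable fit is the model-selection step the abstract describes. Note that using parameters estimated from the same sample makes the standard p-value optimistic (the Lilliefors correction addresses this).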

16.
3D surface reconstruction and motion modeling have been integrated into several industrial applications. Using a pan–tilt–zoom (PTZ) camera, we present an efficient method called dynamic 3D reconstruction (D3DR) for recovering the 3D motion and structure of a freely moving target. The proposed method estimates the PTZ measurements needed to keep the target centered in the camera's field of view (FoV) at a constant size. A feature extraction and tracking approach is used within the imaging framework to estimate the target's translation, position, and distance. A selection strategy chooses keyframes that show significant changes in target movement and directly updates the recovered 3D information. The proposed D3DR method is designed to work in real time and does not require every captured frame to update the recovered 3D motion and structure of the target. Using fewer frames reduces the time and space complexity required. Experiments conducted on real-time video streams with different targets demonstrate the efficiency of the proposed method. D3DR has been compared with existing offline and online 3D reconstruction methods, showing that it requires less execution time than the offline method while using, on average, only 49.6% of the total number of frames captured.
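The keyframe-selection idea (retain only frames showing significant target movement) can be sketched as follows; the displacement criterion and the threshold value are illustrative assumptions, not the paper's exact strategy:

```python
import numpy as np

def select_keyframes(positions, threshold=1.0):
    """Keep a frame as a keyframe when the target has moved more than
    `threshold` since the last keyframe (illustrative criterion)."""
    keyframes = [0]                    # always keep the first frame
    last = positions[0]
    for i, p in enumerate(positions[1:], start=1):
        if np.linalg.norm(p - last) > threshold:
            keyframes.append(i)
            last = p
    return keyframes

# A target that is mostly still, with two large jumps:
track = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 0.0], [2.1, 0.0], [4.0, 0.0]])
print(select_keyframes(track))  # → [0, 2, 4]
```

Only the selected frames would then feed the 3D update, which is what lets a method like D3DR process a fraction of the captured frames.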

17.
Massively parallel computers now permit the molecular dynamics (MD) simulation of multi-million-atom systems on time scales up to the microsecond. However, the subsequent analysis of the resulting simulation trajectories has become a high-performance computing problem in itself. Here, we present software for calculating X-ray and neutron scattering intensities from MD simulation data that scales well on massively parallel supercomputers. The calculation and data-staging schemes used maximize the degree of parallelism and minimize the IO bandwidth requirements. Strong scaling tested on the Jaguar petaflop Cray XT5 at Oak Ridge National Laboratory exhibits virtually linear scaling up to 7000 cores for most benchmark systems. Since both MPI and thread parallelism are supported, the software is flexible enough to cover the scaling demands of different types of scattering calculations. The result is a high-performance tool capable of unifying large-scale supercomputing with a wide variety of neutron/synchrotron technology.
Program summary
Program title: Sassena
Catalogue identifier: AELW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 1 003 742
No. of bytes in distributed program, including test data, etc.: 798
Distribution format: tar.gz
Programming language: C++, OpenMPI
Computer: Distributed memory; clusters of computers with a high-performance network; supercomputers
Operating system: UNIX, LINUX, OSX
Has the code been vectorized or parallelized?: Yes, the code has been parallelized using MPI directives. Tested with up to 7000 processors.
RAM: Up to 1 Gbyte/core
Classification: 6.5, 8
External routines: Boost Library, FFTW3, CMAKE, GNU C++ Compiler, OpenMPI, LibXML, LAPACK
Nature of problem: Recent developments in supercomputing allow molecular dynamics simulations to generate large trajectories spanning millions of frames and thousands of atoms. The structural and dynamical analysis of these trajectories requires analysis algorithms which use parallel computation and IO schemes to solve the computational task in a practical amount of time. The particular computational and IO requirements depend very much on the particular analysis algorithm. In scattering calculations, a very frequent pattern is that the trajectory data is used multiple times to compute different projections, which are aggregated into a single scattering function. Thus, for good performance the trajectory data has to be kept in memory, and the parallel computer has to have enough RAM to store a volatile version of the whole trajectory. In order to achieve high performance and good scalability, the mapping of the physical equations to a parallel computer needs to consider data locality and reduce the amount of inter-node communication.
Solution method: The physical equations for scattering calculations were analyzed, and two major calculation schemes were developed to support any type of scattering calculation (all/self). Certain hardware aspects were taken into account; e.g., high-performance computing clusters and supercomputers usually feature a two-tier network, with Ethernet providing the file storage and InfiniBand the inter-node communication via MPI calls. The time spent loading the trajectory data into memory is minimized by letting each core read only the trajectory data it requires. The performance of inter-node communication is maximized by exclusively utilizing the appropriate MPI calls to exchange the necessary data, resulting in excellent scalability. The partitioning scheme developed to map the calculation onto a parallel computer covers a wide variety of use cases without negatively affecting the achieved performance. This is done through a 2D partitioning scheme in which independent scattering vectors are assigned to independent parallel partitions and all communication is local to the partition.
Additional comments: The distribution file for this program is approximately 36 Mbytes and therefore is not delivered directly when download or e-mail is requested. Instead, an html file giving details of how the program can be obtained is sent.
Running time: Usual runtime spans from 1 min on 20 nodes to 2 h on 2000 nodes, i.e. 0.5–4000 CPU hours per execution.
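The inner kernel of an "all"-type scattering calculation like the one described is the coherent intensity I(q) = |Σ_j b_j exp(iq·r_j)|², summed over atoms j. A minimal serial sketch with unit scattering lengths (the production code distributes scattering vectors and trajectory frames over MPI partitions, as described above):

```python
import numpy as np

def scattering_intensity(positions, q_vec):
    """Static coherent intensity I(q) = |sum_j exp(i q . r_j)|^2
    for unit scattering lengths, positions of shape (N, 3)."""
    phases = np.exp(1j * positions @ q_vec)
    return float(np.abs(phases.sum()) ** 2)

# Eight atoms on a 1D lattice with spacing a; constructive interference
# (a Bragg peak, I = N^2) occurs at q = 2*pi/a.
a = 2.0
positions = np.array([[i * a, 0.0, 0.0] for i in range(8)])
q_bragg = np.array([2.0 * np.pi / a, 0.0, 0.0])
print(scattering_intensity(positions, q_bragg))  # ≈ N^2 = 64
```

Because each q-vector's sum is independent of every other, the computation parallelizes naturally over q, which is exactly what the 2D partitioning scheme described in the summary exploits.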

18.
A sufficient condition is presented for two-dimensional images on a finite rectangular domain Ω=(−A,A)×(−B,B) to be completely determined by features on curves t↦(ξ(t),t) in the three-dimensional domain Ω×(0,∞) of an α-scale space. For any fixed finite set of points in the image, the values of the α-scale space at these points at all scales together do not provide sufficient information to reconstruct the image, even if spatial derivatives up to any order are included as well. On the other hand, the image is completely fixed by the values of the scale space and its derivative along any straight line in Ω×(0,∞) for which ξ:(0,∞)→Ω is linear but not constant. A similar result holds for curves for which ξ is of the form ξ(t)=(ξ1(t),0) with ξ1 periodic and not constant. If the locations at which the scale space is evaluated form a curve on a cylinder in Ω×(0,∞) with some periodic structure, like a helix, then it is sufficient to evaluate the α-scale space without spatial derivatives.

19.
Today, various Science Gateways created in close collaboration with scientific communities provide access to remote and distributed HPC, Grid and Cloud computing resources and large-scale storage facilities. However, as we have observed, there are still many entry barriers for new users and various limitations for active scientists. In this paper we present our latest achievements and software solutions that significantly simplify the use of large-scale and distributed computing. We describe several Science Gateways that have been successfully created with the help of our application tools and the QCG (Quality in Cloud and Grid) middleware, in particular Vine Toolkit, QCG-Portal and QCG-Now, which make the use of HPC, Grid and Cloud resources more straightforward and transparent. Additionally, we share the best practices and lessons learned from creating, jointly with user communities, many domain-specific Science Gateways, e.g. those dedicated to physicists, medical scientists, chemists, engineers and external communities performing multi-scale simulations. As our deployed software solutions have recently reached a critical mass of active users in the PLGrid e-infrastructure in Poland, we also discuss how changing technologies, visual design and user experience could impact the way we should redesign Science Gateways or even develop new attractive tools, e.g. desktop or mobile applications, in the future. Finally, we present information and statistics regarding the behaviour of users to help readers understand how new capabilities and functionalities may influence the growth of user interest in Science Gateways and HPC technologies.

20.
Modeling has become common practice in modern software engineering. Since the mid-1990s, the Unified Modeling Language (UML) has been the de facto standard for modeling software systems. The UML is used in all phases of software development, from the requirements phase to the maintenance phase. However, empirical evidence regarding the effectiveness of modeling in software development is few and far between. This paper aims to synthesize empirical evidence regarding the effectiveness of modeling with UML in software development, with a special focus on its costs and benefits.
