Similar Documents (20 results found)
1.
Agent-based models (ABMs) have become an important tool for advancing scientific understanding in a variety of disciplines and, more specifically, have delivered gains in natural resource management in recent decades. However, a key challenge to their utility is the lack of convergence on a common set of assumptions for representing key processes (such as agent decision structure), with the outcome that published ABM tools are rarely (if ever) used beyond their original development team. While a number of ABM frameworks are publicly available, the fact that models continue to be built from scratch signals the ongoing difficulty of capturing sufficient modeling flexibility in a single package. In this study we examine ABM sharing by comparing co-citation networks from several well-known ABM frameworks to those used in the land-use change modeling community. We then outline a different publication paradigm for the ABM community that could improve the sharing of model structure and help move toward convergence on a common set of tools and assumptions.

2.
In this paper, we propose and investigate a novel iris weight map method for the iris matching stage to improve less constrained iris recognition. The proposed iris weight map considers both the intra-class bit stability and the inter-class bit discriminability of iris codes. We model the intra-class bit stability in a stability map to improve intra-class matching. The stability map assigns more weight to the bits whose values are more consistent with their noiseless, stable estimates, obtained using a low-rank approximation from a set of noisy training images. We also express the inter-class bit discriminability in a discriminability map to enhance inter-class separation. We calculate the discriminability map using a 1-to-N strategy, emphasizing the bits with more discriminative power in iris codes. The final iris weight map is the combination of the stability map and the discriminability map. We conduct experimental analysis on four publicly available datasets captured under varying less constrained conditions. The experimental results demonstrate that the proposed iris weight map achieves generally improved identification and verification performance compared to state-of-the-art methods.
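
At its core, the matching stage this method modifies is a weighted fractional Hamming distance over binary iris codes. The sketch below illustrates that general idea in NumPy; the stability and discriminability maps are random placeholders, and combining them by element-wise product is an assumption for illustration, not the paper's exact rule.

```python
import numpy as np

def weighted_hamming(code_a, code_b, mask_a, mask_b, weights):
    """Weighted fractional Hamming distance between two binary iris codes.

    code_a, code_b : boolean arrays of iris code bits
    mask_a, mask_b : boolean arrays, True where bits are valid (not occluded)
    weights        : per-bit weight map, e.g. stability * discriminability
    """
    valid = mask_a & mask_b
    disagree = (code_a != code_b) & valid
    return (weights * disagree).sum() / max((weights * valid).sum(), 1e-12)

# Toy usage with random codes and a combined weight map.
rng = np.random.default_rng(0)
shape = (20, 240)                              # illustrative code dimensions
code_a = rng.random(shape) > 0.5
code_b = code_a.copy()
code_b[:, :40] = rng.random((20, 40)) > 0.5    # corrupt a region of the second code
mask = np.ones(shape, dtype=bool)
stability = rng.random(shape)                  # placeholder intra-class stability map
discriminability = rng.random(shape)           # placeholder inter-class map
weights = stability * discriminability         # assumed combination rule
print(weighted_hamming(code_a, code_b, mask, mask, weights))
```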

3.
Graphics Processing Units (GPUs) have greatly exceeded their initial role of graphics accelerators and have taken on a new role as co-processors for computation-heavy tasks. Both the hardware and software ecosystems have now matured, with fully IEEE-compliant double precision and memory error correction supported and a rich set of software tools and libraries available. This in turn has led to their increased adoption in a growing number of fields, both in academia and, more recently, in industry. In this review we investigate the adoption of GPUs as accelerators in the field of Finite Element Structural Analysis, a design tool that is now essential in many branches of engineering. We survey the work that has been done in accelerating the most time-consuming steps of the analysis, indicate the speedups that have been achieved and, where available, highlight software libraries and packages that will enable the reader to take advantage of such acceleration. Overall, we draw a high-level picture of the current state of the art.
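
In implicit finite element analysis, the most time-consuming step, and thus the usual target for GPU offloading, is the sparse linear solve. As a rough illustration (not code from any surveyed package), the conjugate gradient kernel below is written against the NumPy API; since GPU array libraries such as CuPy mirror that API, the same kernel can be moved to a GPU largely by swapping the import.

```python
import numpy as np
# import cupy as np   # on a CUDA machine, this swap offloads the kernel to the GPU

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A (e.g. a stiffness matrix)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy SPD system standing in for an assembled stiffness matrix.
n = 200
M = np.random.default_rng(1).random((n, n))
A = M @ M.T + n * np.eye(n)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # residual should be near zero
```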

4.
Feature coding is one of the most important procedures in the bag-of-features model for image classification. In this paper, we propose a novel feature coding method called nonnegative correlation coding. To obtain a discriminative image representation, our method employs two correlations: the correlation between features and visual words, and the correlation between the obtained codes. The first correlation reflects the locality of codes, i.e., visual words close to the local feature are activated more easily than distant ones. The second correlation characterizes the similarity of codes: similar local features are likely to have similar codes. Both correlations are modeled under a nonnegative constraint. Based on Nesterov's gradient projection algorithm, we develop an effective numerical solver that optimizes the nonnegative correlation coding problem with guaranteed quadratic convergence. Comprehensive experimental results on publicly available datasets demonstrate the effectiveness of our method.
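
The solver described, Nesterov's gradient projection under a nonnegativity constraint, has a compact generic form. The sketch below applies it to a plain nonnegative least-squares coding objective; the paper's two correlation terms are not reproduced, so the objective here is a simplified stand-in for illustration.

```python
import numpy as np

def nesterov_projected_gradient(B, x, L, steps=500):
    """Minimize ||B c - x||^2 subject to c >= 0 via accelerated projected gradient.

    B : codebook of visual words (d x k), x : local feature (d,)
    L : Lipschitz constant of the gradient, e.g. largest eigenvalue of 2 B^T B
    """
    c = np.zeros(B.shape[1])   # current code
    y = c.copy()               # extrapolation point
    t = 1.0
    for _ in range(steps):
        grad = 2 * B.T @ (B @ y - x)
        c_new = np.maximum(y - grad / L, 0.0)   # gradient step + projection onto c >= 0
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = c_new + ((t - 1) / t_new) * (c_new - c)
        c, t = c_new, t_new
    return c

rng = np.random.default_rng(0)
B = rng.random((64, 128))                      # toy codebook
x = rng.random(64)                             # toy local feature
L = 2 * np.linalg.eigvalsh(B.T @ B).max()      # Lipschitz constant of the gradient
code = nesterov_projected_gradient(B, x, L)
print((code >= 0).all(), code.max())           # code is nonnegative by construction
```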

5.
《Information & Management》1996,31(3):119-134
Spreadsheets have long been recognized as important tools for end-user computing. This research explores their use within business organizations. A survey was carried out to investigate the relationships among tasks, spreadsheet proficiency, usage, and satisfaction. The results suggested that spreadsheet proficiency has a greater impact on the tasks undertaken than the tasks have on spreadsheet proficiency. It was also found that spreadsheet users often do not use many of the commonly available spreadsheet features, and they do not appear inclined to use other software packages for their tasks, even if those packages might be more suitable. The proficiency of spreadsheet users was not found to be related to the importance of the decisions taken as a result of the spreadsheet analyses.

6.
In this paper we develop and use a framework for evaluating the success of microcomputer-based integrated administrative software packages in small businesses. The results of an exploratory survey of 66 small business organizations show that small businesses have often been disappointed with their software packages. The disappointment frequently stems from the inability of the package to adapt to the needs of the company, especially in small businesses with more than 50 employees. For small businesses with fewer than 20 employees, the packages are too difficult to use. These findings indicate that small businesses should place more emphasis on the acquisition process, especially requirements specification, and that developers should improve user-friendliness and the quality of support and documentation to fulfil the needs of the smallest of small businesses. The successful implementers, drawing on their experience, knew their needs, acquired the package carefully, and implemented various parts of it, benefiting from integration. There seem to be no shortcuts to success, but a determined and eager attitude among personnel can produce success with any of the software packages in our study.

7.
This study critically investigates the main characteristics and features of anti-filtering packages provided as Free and Open Source Software (FOSS). For over a decade, digital communities around the globe have used FOSS packages not only as an inexpensive way to access information available on the Internet, but also to disseminate thoughts, opinions and concerns about various socio-political and economic matters. Proxy servers and FOSS have played a vital role in helping citizens of repressive countries to bypass state-imposed Internet content filtering and censorship. Proxy servers act as redirectors to websites, and many of them are also the main source for downloading FOSS anti-filtering packages. These packages can provide secure web surfing via anonymous web access, data encryption, IP address masking, location concealment, and browser history and cookie clean-ups; they also provide proxy software updates as well as domain name updates.

The main objectives of this study are to investigate the role of FOSS packages in combating Internet content filtering and censorship and in empowering citizens to participate effectively in communication discourse. By evaluating some of the well-known FOSS anti-filtering packages used by Iran's digital community, this study found that, despite the success of FOSS in combating filtering and state censorship, the majority of these packages were not designed to meet the needs of Internet users. In particular, they are poorly adapted to the slow Internet connections in many developing countries such as Iran. In addition, these packages do not match the level of sophistication used by authorities to filter the content of the Net. This study therefore offers a new model that takes into account not only the existing level of Internet infrastructure but also the growing number of Internet users demanding more effective FOSS packages for faster access to uncensored information while maintaining anonymity.

8.
Using the right software is increasingly critical to project success, but the choices keep getting wider and more confusing. Open source software (OSS) has entered the mix, leaving the traditional confines of the hacker community and entering large-scale, well-publicized applications. However, although some argue that it is ready for wide-scale commercial adoption and deployment, the myriad of OSS packages makes actual adoption a real challenge. This article presents a straightforward and practical roadmap for navigating your OSS adoption considerations. There is no universally accepted definition of OSS. For instance, Netscape, Sun Microsystems, and Apple recently introduced what they call "community-source" versions of their popular software: the Mozilla project, Solaris, and MacOS X, respectively. Such efforts, while validating the OSS concept, also make their inclusion in the OSS community a potential topic of contention. We use a loose definition of OSS that includes publicly available source code and community-source software.

9.
Text, one of the most influential inventions of humanity, has played an important role in human life from ancient times to the present. The rich and precise information embodied in text is very useful in a wide range of vision-based applications; text detection and recognition in natural scenes have therefore become important and active research topics in computer vision and document analysis. In recent years especially, the community has seen a surge of research effort and substantial progress in these fields, although a variety of challenges (e.g. noise, blur, distortion, occlusion and variation) still remain. The purposes of this survey are three-fold: 1) to introduce up-to-date work, 2) to identify state-of-the-art algorithms, and 3) to predict potential future research directions. Moreover, this paper provides comprehensive links to publicly available resources, including benchmark datasets, source code, and online demos. In summary, this literature review can serve as a good reference for researchers in the areas of scene text detection and recognition.

10.
By representing the constraints and objective function in factorized form, graphical models can concisely define various NP-hard optimization problems. They are therefore extensively used in several areas of computer science and artificial intelligence. Graphical models can be deterministic or stochastic, and optimize a sum or product of local functions, defining a joint cost or probability distribution. Simple transformations exist between these two types of models, and also to MaxSAT and linear programming. In this paper, we report on a large comparison of exact solvers, each state-of-the-art for its own target language. The solvers are evaluated on deterministic and probabilistic graphical models from the Probabilistic Inference Challenge 2011, the Computer Vision and Pattern Recognition OpenGM2 benchmark, the Weighted Partial MaxSAT Evaluation 2013, the MaxCSP 2008 Competition, the MiniZinc Challenge 2012 & 2013, and the CFLib (a library of Cost Function Networks). All 3026 instances are made publicly available in five different formats and seven formulations. To our knowledge, this is the first evaluation that encompasses such a large set of related NP-complete optimization frameworks, despite their tight connections. The results show that a small number of the evaluated solvers perform well across multiple areas. By exploiting the variability and complementarity of solver performances, we show that a simple portfolio approach can be very effective. This portfolio won the last UAI Evaluation 2014 (MAP task).
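
The portfolio result lends itself to a simple operational sketch: give each solver in a fixed schedule an equal slice of the time budget and return the first answer found. The solver interface below (a callable taking an instance and a timeout) is hypothetical, chosen only to make the control flow concrete.

```python
import time

def run_portfolio(solvers, instance, budget_s=60.0):
    """Run solvers sequentially, giving each an equal slice of the time budget.

    solvers : list of (name, callable) pairs; each callable takes (instance,
              timeout_s) and returns a solution or None -- hypothetical interface.
    Returns the first solution found, with the winning solver's name.
    """
    slice_s = budget_s / len(solvers)
    for name, solve in solvers:
        start = time.monotonic()
        result = solve(instance, timeout_s=slice_s)
        elapsed = time.monotonic() - start
        if result is not None:
            return name, result, elapsed
    return None, None, budget_s

# Toy stand-ins for real solvers (e.g. a MaxSAT solver and a CFN solver).
def fast_but_narrow(instance, timeout_s):
    return instance.get("easy_answer")          # succeeds only on easy instances

def slow_but_general(instance, timeout_s):
    time.sleep(min(0.01, timeout_s))            # pretend to search
    return instance.get("answer")

portfolio = [("narrow", fast_but_narrow), ("general", slow_but_general)]
print(run_portfolio(portfolio, {"answer": 42}, budget_s=1.0))
```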

11.
The role of microcomputer database management system (DBMS) packages in the management of medical research studies is reviewed, and the features of commercial DBMS packages that are of particular advantage are identified. A benchmark test, resembling stages in the conduct of a research project, was constructed, and four commercial packages were compared on their performance of it. The packages varied in the facilities offered, and the more sophisticated ones took longer to set up for the task. In use, the less sophisticated packages were faster, but could become tedious for regular tasks and more error-prone for complex tasks. The selection of a DBMS package depends on the intended application, and requires a detailed assessment of the task to be performed: as a general rule, a sophisticated package is only warranted if the study is both complex and large. In practice, research departments may benefit from more than one package.

12.
The volume of publicly available data in biomedicine is constantly increasing. However, these data are stored in different formats and on different platforms. Integrating these data will enable us to accelerate the pace of medical discovery by providing scientists with a unified view of this diverse information. Under the auspices of the National Center for Biomedical Ontology (NCBO), we have developed the Resource Index – a growing, large-scale ontology-based index of more than twenty heterogeneous biomedical resources. The resources come from a variety of repositories maintained by organizations around the world. We use a set of over 200 publicly available ontologies contributed by researchers in various domains to annotate the elements in these resources. We use the semantics that the ontologies encode, such as the properties of classes, the class hierarchies, and the mappings between ontologies, to improve the search experience for the Resource Index user. Our user interface enables scientists to search the multiple resources quickly and efficiently using domain terms, without even being aware that there is semantics "under the hood."

13.
Managing complex data and geometry in parallel structured AMR applications
Adaptive mesh refinement (AMR) is an increasingly important simulation methodology for many science and engineering problems. AMR has the potential to generate highly resolved simulations efficiently by dynamically refining the computational mesh near key numerical solution features. However, AMR requires more complex numerical algorithms and programming than uniform fixed-mesh approaches. Software libraries that provide general AMR functionality can ease these burdens significantly. A major challenge for library developers is to achieve adequate flexibility to meet diverse and evolving application requirements. In this paper, we describe the design of software abstractions for general AMR data management and parallel communication operations in SAMRAI, an object-oriented C++ structured AMR (SAMR) library developed at Lawrence Livermore National Laboratory (LLNL). The SAMRAI infrastructure provides the foundation for a variety of application codes at LLNL and elsewhere. We illustrate SAMRAI functionality by describing how its unique features are used in these codes, which employ complex data structures and geometry. We highlight capabilities for moving and deforming meshes, coupling multiple SAMR mesh hierarchies, and immersed and embedded boundary methods for modeling complex geometrical features. We also describe how irregular data structures, such as particles and internal mesh boundaries, may be implemented using SAMRAI tools without excessive application programmer effort. This work was performed under the auspices of the US Department of Energy by University of California Lawrence Livermore National Laboratory under contract number W-7405-Eng-48 and is released under UCRL-JRNL-214559.
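
The tagging stage that structured AMR libraries such as SAMRAI orchestrate, flagging cells where the solution is under-resolved before clustering them into refined patches, can be illustrated compactly. The sketch below shows only that flagging step on a uniform 2-D array with a gradient-magnitude criterion; it is a conceptual illustration, not SAMRAI code.

```python
import numpy as np

def flag_cells_for_refinement(u, threshold):
    """Flag cells of a 2-D solution array whose gradient magnitude exceeds a threshold.

    This mimics the tagging stage of structured AMR: flagged cells would next be
    clustered into rectangular patches and refined.
    """
    gy, gx = np.gradient(u)          # per-cell finite differences (unit spacing)
    return np.hypot(gx, gy) > threshold

# Toy solution with a sharp front along a circle.
n = 128
y, x = np.mgrid[0:n, 0:n] / n
u = np.tanh(((x - 0.5) ** 2 + (y - 0.5) ** 2 - 0.1) * 200)
flags = flag_cells_for_refinement(u, threshold=0.5)
print(f"{flags.mean():.1%} of cells flagged for refinement")
```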

14.
Analysis of human behaviour through visual information has been a highly active research topic in the computer vision community. Previously this was achieved with images from conventional cameras, but recently depth sensors have made a new type of data available. This survey starts by explaining the advantages of depth imagery, then describes the new sensors available to obtain it. In particular, the Microsoft Kinect has made high-resolution, real-time depth cheaply available. The main published research on the use of depth imagery for analysing human activity is reviewed. Much of the existing work focuses on body part detection and pose estimation. A growing research area addresses the recognition of human actions. The publicly available datasets that include depth imagery are listed, as are the software libraries that can acquire it from a sensor. The survey concludes by summarising the current state of work on this topic and pointing out promising future research directions. For researchers and practitioners who are familiar with this topic, as well as those new to the field, the review will aid in the selection and development of algorithms using depth data.

15.
With the advantages of low storage cost and high retrieval efficiency, hashing has recently become an emerging topic in cross-modal similarity search. Because data in multiple modalities reflect similar semantic content, many works aim to learn unified binary codes. However, the hashing features these methods learn are insufficiently discriminative, which lowers accuracy and robustness. We propose Discriminative Supervised Hashing (DSH), a novel hashing framework that jointly performs classifier learning, subspace learning, and matrix factorization to preserve class-specific semantic content and learn discriminative unified binary codes for multi-modal data. In addition, to reduce information loss and preserve the non-linear structure of the data, DSH non-linearly projects the different modalities into a common space in which the similarity among heterogeneous data points can be measured. Extensive experiments on three publicly available datasets demonstrate that the proposed framework outperforms several state-of-the-art methods.
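
Whatever the training objective, the retrieval side of a unified-binary-code scheme like this reduces to projecting each modality into the common space, binarizing by sign, and ranking by Hamming distance. The sketch below shows that generic pipeline with random matrices standing in for the learned projections; it illustrates the mechanism, not the authors' DSH model.

```python
import numpy as np

def to_binary_codes(X, W):
    """Project features into the common space and binarize by sign."""
    return (X @ W) > 0

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    dists = (query_code != db_codes).sum(axis=1)
    return np.argsort(dists)

rng = np.random.default_rng(0)
d_img, d_txt, bits = 512, 300, 64
W_img = rng.standard_normal((d_img, bits))   # stand-in for the learned image projection
W_txt = rng.standard_normal((d_txt, bits))   # stand-in for the learned text projection

img_db = to_binary_codes(rng.standard_normal((1000, d_img)), W_img)
txt_query = to_binary_codes(rng.standard_normal((1, d_txt)), W_txt)

print(hamming_rank(txt_query[0], img_db)[:5])   # indices of the top-5 cross-modal matches
```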

16.
Sentiment lexicons and word embeddings constitute well-established sources of information for sentiment analysis in online social media. Although their effectiveness has been demonstrated in state-of-the-art sentiment analysis and related tasks in the English language, such publicly available resources are much less developed and evaluated for the Greek language. In this paper, we tackle the problems arising when analyzing text in such an under-resourced language. We present and make publicly available a rich set of such resources, ranging from a manually annotated lexicon, to semi-supervised word embedding vectors and annotated datasets for different tasks. Our experiments using different algorithms and parameters on our resources show promising results over standard baselines; on average, we achieve a 24.9% relative improvement in F-score on the cross-domain sentiment analysis task when training the same algorithms with our resources, compared to training them on more traditional feature sources, such as n-grams. Importantly, while our resources were built with the primary focus on the cross-domain sentiment analysis task, they also show promising results in related tasks, such as emotion analysis and sarcasm detection.
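
The baseline way to consume a sentiment lexicon like the one released is a word-to-polarity lookup aggregated over a tokenized post. The sketch below shows that usage pattern; the lexicon entries are hypothetical placeholders, not the actual resource.

```python
# Minimal lexicon-based sentiment scoring, the baseline use of such a resource.
# The Greek lexicon entries below are hypothetical placeholders, not the released data.
lexicon = {"καλός": 1.0, "τέλειος": 1.5, "κακός": -1.0, "απαίσιος": -1.5}

def sentiment_score(text):
    """Average the polarity of lexicon words found in the text; 0.0 if none match."""
    tokens = text.lower().split()
    hits = [lexicon[t] for t in tokens if t in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("τέλειος και καλός"))   # positive
print(sentiment_score("απαίσιος"))            # negative
```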

17.
Conventional computer-aided design (CAD) packages have drastically reduced the workload of the human designer and shortened the product design cycle. However, the degree of effort and the volume of information required to use these tools limit their use to the later stages of design. Intelligent computer-aided design (ICAD) systems seek to provide a more complete design tool that assists the designer in all phases of design. ICAD systems incorporate conventional CAD elements as well as knowledge engineering constructs. The level of integration between the different components of an ICAD system determines its usefulness. Most commercial intelligent CAD packages are tied to a specific set of CAD tools, restricting their application domains. This dependence on specific software tools can be reduced by using general-purpose modules to interface with available CAD packages. This paper discusses a method of introducing knowledge engineering technology to help develop an advanced intelligent product design system by integrating ICAD's Concept Modeller with SDRC's I-DEAS package for engineering product design. This integration is necessary because neither the Concept Modeller nor the I-DEAS package provides a unified design environment where users can access both symbolic and numerical design tools as needed to carry out design synthesis and analysis activities. Interfacing engineering design and knowledge processing is not an easy task. The task is further complicated because it can only be done by those with sufficient knowledge of both technologies, and because it may require reorganizing the traditional design process altogether. The proposed intelligent product design system uses artificial intelligence techniques to capture human experts' knowledge, and it advocates the use of several commercial software packages from a variety of sources (proven to be robust) to perform design synthesis in a cost-efficient and timely manner. The technique described here is relatively easy to implement and is well suited to industrial needs.

18.
J. van Gurp  J. Bosch 《Software》2001,31(3):277-300
Object-oriented frameworks provide software developers with the means to build an infrastructure for their applications. Unfortunately, frameworks do not always deliver on their promises of reusability and flexibility. To address this, we have developed a conceptual model for frameworks and a set of guidelines for building object-oriented frameworks that adhere to this model. Our guidelines focus on improving the flexibility, reusability and usability (i.e. making it easy to use a framework) of frameworks. Copyright © 2001 John Wiley & Sons, Ltd.
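
The flexibility-versus-usability trade-off such guidelines address typically hinges on how a framework exposes its variation points. One classic mechanism is the template method: the framework fixes the control flow and applications override narrow hooks. The sketch below illustrates that pattern generically; it is not taken from the paper.

```python
from abc import ABC, abstractmethod

class ReportFramework(ABC):
    """Framework class: fixes the control flow, exposes narrow hooks (hot spots)."""

    def run(self):                     # frozen spot: applications do not override this
        data = self.load()
        return self.render(self.summarize(data))

    @abstractmethod
    def load(self):                    # hot spot: each application must supply this
        ...

    def summarize(self, data):         # hook with a sensible default, override if needed
        return {"count": len(data)}

    def render(self, summary):
        return ", ".join(f"{k}={v}" for k, v in summary.items())

class CsvReport(ReportFramework):      # application code: only the hooks are written
    def load(self):
        return ["row1", "row2", "row3"]

print(CsvReport().run())               # -> count=3
```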

19.
In this article, six individual tree crown (ITC) detection/delineation algorithms are evaluated, using an image data set containing six diverse forest types at different geographical locations in three European countries. The algorithms use fundamentally different techniques, including local maxima detection, valley following (VF), region-growing (RG), template matching (TM), scale-space (SS) theory, and techniques based on stochastic frameworks. The structural complexity of the forests in the aerial images ranges from a homogeneous plantation and an area with isolated tree crowns to an extremely dense deciduous forest. None of the algorithms alone could successfully analyse all the different cases. The study shows that it is important to partition the imagery into homogeneous forest stands prior to applying individual tree detection algorithms. It furthermore suggests the need for a common, publicly available suite of test images and common test procedures for evaluating individual tree detection/delineation algorithms. Finally, it highlights that, for complex forest types, monoscopic images are insufficient for consistent tree crown detection, even by human interpreters.
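
Of the techniques compared, local maxima detection is the simplest to sketch: smooth the image so each crown produces one peak, then keep pixels that dominate their neighbourhood. The code below is a generic illustration of that idea (with assumed smoothing and window parameters), not a reimplementation of any evaluated system.

```python
import numpy as np
from scipy import ndimage

def detect_crowns(image, sigma=2.0, window=7, min_brightness=0.3):
    """Local-maxima tree crown detection on a single-band image.

    Smooth so each crown produces one peak, then keep pixels that equal the
    maximum of their (window x window) neighbourhood and are bright enough.
    """
    smooth = ndimage.gaussian_filter(image, sigma)
    peaks = smooth == ndimage.maximum_filter(smooth, size=window)
    return np.argwhere(peaks & (smooth > min_brightness))

# Toy image: two Gaussian blobs standing in for crowns.
y, x = np.mgrid[0:100, 0:100]
image = np.exp(-((x - 30) ** 2 + (y - 30) ** 2) / 50.0) \
      + np.exp(-((x - 70) ** 2 + (y - 60) ** 2) / 50.0)
print(detect_crowns(image))   # approximately [[30, 30], [60, 70]] as (row, col)
```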

20.
Dynamic storage allocation is an important part of a large class of computer programs written in C and C++. High-performance algorithms for dynamic storage allocation have been, and will continue to be, of considerable interest. This paper presents detailed measurements of the cost of dynamic storage allocation in 11 diverse C and C++ programs using five very different dynamic storage allocation implementations, including a conservative garbage collection algorithm. Four of the allocator implementations measured are publicly available on the Internet. A number of the programs used in these measurements are also available on the Internet to facilitate further research in dynamic storage allocation. Finally, the data presented in this paper is an abbreviated version of more extensive statistics that are also publicly available on the Internet.
