The paper argues that an effective solution to the information and knowledge management (KM) needs of practitioners in the construction industry lies in providing an adapted knowledge environment that exploits user profiling and document summarization techniques drawn from information retrieval. The conceptualization of the domain through an ontology plays a pivotal role in the proposed knowledge environment and provides a semantic referential that ensures the relevance, accuracy, and completeness of information. A set of KM services articulated around the selected ontology has been developed using the Web services model, then tested and validated in real organizational settings. This provided the basis for formulating recommendations and key success factors for any KM project.
Silicon - This work centres on silicon nanoparticles (SiNPs), used as colour converters for transparent LEDs. The spectrum obtained is fully white, so it is...
Geographic routing protocols use location information to route packets, while that location information is maintained by location-based services provided by the network nodes in a distributed manner. Routing and location services are closely related, yet they are used separately; as a result, the overhead of the location-based service is not taken into account when the overhead of geographic routing is evaluated. Our aim is to combine routing protocols with location-based services in order to reduce both communication establishment latency and routing overhead.
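To illustrate the kind of routing the abstract refers to, here is a minimal sketch of greedy geographic forwarding. The node positions, neighbour lists, and names are invented for the example; in a real protocol the coordinates would come from the location service discussed above.

```python
import math

# Toy topology; in a real protocol positions come from the location service.
positions = {"s": (0, 0), "a": (1, 1), "b": (2, 0), "c": (3, 1), "d": (4, 0)}
neighbors = {"s": ["a", "b"], "a": ["s", "b", "c"], "b": ["s", "a", "c"],
             "c": ["a", "b", "d"], "d": ["c"]}

def dist(u, v):
    (x1, y1), (x2, y2) = positions[u], positions[v]
    return math.hypot(x2 - x1, y2 - y1)

def greedy_route(src, dst):
    """Greedy geographic forwarding: each hop goes to the neighbour
    closest to the destination; give up at a local minimum."""
    path = [src]
    while path[-1] != dst:
        here = path[-1]
        nxt = min(neighbors[here], key=lambda n: dist(n, dst))
        if dist(nxt, dst) >= dist(here, dst):
            return None  # local minimum: no neighbour makes progress
        path.append(nxt)
    return path
```

Greedy forwarding needs no routing tables, which is why the per-packet overhead is low; the hidden cost, as the abstract notes, is the location service that keeps `positions` up to date.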
In the density classification problem, a binary cellular automaton (CA) must decide whether an initial configuration contains more 0s or more 1s. The answer is given when all cells of the CA agree on a single state. This problem is known to have no exact solution in the case of binary deterministic one-dimensional CA. We investigate how randomness in CA may help solve the problem. We analyse the behaviour of stochastic CA rules that perform the density classification task. We show that describing stochastic rules as a “blend” of deterministic rules allows us to derive quantitative results on the classification quality and the classification time of previously studied rules. We introduce a new rule whose effect is to spread defects and to wash them out. This stochastic rule solves the problem with arbitrary precision, that is, its quality of classification can be made arbitrarily high, though at the price of an increased convergence time. We experimentally demonstrate that this rule exhibits good scaling properties and that it attains qualities of classification never reached so far.
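The “blend” of deterministic rules can be illustrated with a small simulation. The sketch below assumes, purely for illustration, a mixture in which each cell independently applies elementary rule 184 (traffic) with probability eta and rule 232 (majority) otherwise; the actual rule introduced in the paper may differ.

```python
import random

def eca_rule(n):
    # Lookup table of elementary CA rule n over neighbourhoods (l, c, r),
    # using the standard Wolfram bit numbering.
    return {(l, c, r): (n >> (l * 4 + c * 2 + r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

R184, R232 = eca_rule(184), eca_rule(232)  # traffic and majority rules

def step(config, eta, rng):
    """One synchronous update on a ring: each cell independently
    applies rule 184 with probability eta, rule 232 otherwise."""
    n = len(config)
    return [(R184 if rng.random() < eta else R232)[
                (config[(i - 1) % n], config[i], config[(i + 1) % n])]
            for i in range(n)]

def classify(config, eta=0.1, max_steps=10_000, rng=None):
    """Iterate until all cells agree; return the agreed state or None."""
    rng = rng or random.Random(0)
    for _ in range(max_steps):
        if all(c == config[0] for c in config):
            return config[0]
        config = step(config, eta, rng)
    return None  # undecided within the step budget
```

Note the trade-off mentioned in the abstract: a smaller eta-style mixing parameter tends to improve classification quality while lengthening the convergence time.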
The migration from circuit-switched networks to packet-switched networks necessitates the investigation of related issues such as service delivery, QoS, security, and service fraud and misuse. The latter can be seen as a combination of accounting and security aspects. In traditional telecommunication networks, fraud accounts for annual losses averaging 3%–5% of operators’ revenue, and these losses are still increasing at a rate of more than 10% per year. The situation is expected to be worse in VoIP networks, owing to the lack of strong built-in security mechanisms and the use of open standards. This paper discusses the fraud problem in VoIP networks and evaluates the available solutions.
The paper proposes a controller scheme based on a priori identification for a C5 parallel robot. First, we identify the dynamic parameters of the robot using the least-squares technique; distinct data sets are used for this identification step, and cross-validation allows us to select and confirm the identified parameters. A computed-torque control scheme, whose functions rely on the previously identified parameters, is then applied to control the C5 parallel robot. To reduce the effect of identification errors, we add a robustness term based on the sliding-mode technique. Closed-loop stability is established using the Lyapunov principle. Experimental identification and control results are presented and show the effectiveness of our methodology.
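The least-squares identification step relies on robot dynamics being linear in the parameters, tau = W(q, qd, qdd) theta. A minimal sketch on a hypothetical 1-DOF joint model (inertia, viscous and Coulomb friction); the model, trajectory, and parameter values are invented for illustration and are not the C5 robot's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-DOF model, linear in theta = [I, Fv, Fc]:
#   tau = I*qdd + Fv*qd + Fc*sign(qd)
theta_true = np.array([0.8, 0.15, 0.3])

t = np.linspace(0.0, 5.0, 500)
q   = np.sin(2 * np.pi * t)                    # exciting trajectory
qd  = 2 * np.pi * np.cos(2 * np.pi * t)
qdd = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * t)

# Regressor (observation) matrix and noisy measured torques.
W = np.column_stack([qdd, qd, np.sign(qd)])
tau = W @ theta_true + 0.01 * rng.standard_normal(len(t))

# Least-squares estimate of the dynamic parameters.
theta_hat, *_ = np.linalg.lstsq(W, tau, rcond=None)
```

Cross-validation, as in the abstract, would repeat the fit on distinct data sets and keep only parameters that predict torques well on data not used for the fit.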
Energy is a scarce resource in Wireless Sensor Networks (WSNs). Some studies show that more than 70% of the energy in a WSN is consumed in data transmission. Since the sensed information is often redundant, because sensors are geographically collocated, much of this energy can be saved through data aggregation. Furthermore, data aggregation improves bandwidth usage and reduces collisions due to interference. Unfortunately, while aggregation eliminates redundancy, it makes data integrity verification more complicated, since the received data is unique.
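A toy sketch of the aggregation idea, assuming a hypothetical sensor tree and average aggregation: each node forwards a single (sum, count) pair upstream instead of relaying every raw reading, so the transmission count scales with the number of nodes rather than with readings times hops.

```python
# Hypothetical sensor tree: internal node -> children; leaves carry readings.
tree = {"sink": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2", "b3"]}
readings = {"a1": 21.0, "a2": 21.2, "b1": 20.8, "b2": 21.1, "b3": 20.9}

def aggregate(node):
    """Return (sum, count) for the subtree rooted at node.

    Each node sends a single (sum, count) pair to its parent, which is
    the aggregation step that eliminates redundant transmissions."""
    if node in readings:                       # leaf sensor
        return readings[node], 1
    s, c = 0.0, 0
    for child in tree.get(node, []):
        cs, cc = aggregate(child)
        s, c = s + cs, c + cc
    return s, c

total, count = aggregate("sink")
mean = total / count
```

The integrity problem mentioned above is visible here: the sink only ever sees `(total, count)`, so it cannot check the individual readings that produced it.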
The problem of reconstructing a pattern of an object from its approximate discrete orthogonal projections in a two-dimensional grid may have no solution, because inaccuracy in the measurements of the projections may yield an inconsistent problem. To overcome this difficulty, one seeks to reconstruct a pattern whose projection values may differ from the given projection values by bounded amounts, while minimizing the sum of the absolute differences.
This paper addresses the problem of reconstructing a pattern with a difference of at most +1 or −1 between each of its projection values and the corresponding given projection value. We deal with the case of patterns that have to be horizontally and vertically convex, and the case of patterns that must moreover be connected, the so-called convex polyominoes. We show that in both cases the reconstruction problem can be transformed into a Satisfiability (SAT) problem, in order to take advantage of recent advances in the design of SAT solvers. We show experimentally that, by adding two important features to CSAT (an efficient SAT solver), optimal patterns can be found whenever feasible ones exist. These two features are, first, a method that extracts in linear time an optimal pattern from a set of feasible patterns grouped in a generic pattern (obtaining a generic pattern may take exponential time in the worst case) and, second, a method that actively computes a lower bound on the sum of absolute differences obtainable from a partially defined pattern. This allows us to prune the search tree whenever this lower bound exceeds the best sum of absolute differences found so far.
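To give a flavour of such a SAT encoding, the sketch below generates CNF clauses enforcing horizontal and vertical convexity alone. Variable numbering follows the DIMACS convention; this is an illustrative fragment only, and the paper's full model also covers connectivity and the ±1 projection tolerances.

```python
from itertools import combinations

def convexity_clauses(rows, cols):
    """CNF clauses forcing horizontal and vertical convexity of a binary
    rows x cols pattern. Variable v(r, c) is true iff cell (r, c) is filled.
    A line is convex iff no filled-empty-filled triple occurs along it,
    encoded as the clause (-a OR x OR -b) for every a < x < b on the line.
    """
    v = lambda r, c: r * cols + c + 1          # DIMACS variable numbering
    clauses = []
    for r in range(rows):                      # horizontal convexity
        for a, x, b in combinations(range(cols), 3):
            clauses.append((-v(r, a), v(r, x), -v(r, b)))
    for c in range(cols):                      # vertical convexity
        for a, x, b in combinations(range(rows), 3):
            clauses.append((-v(a, c), v(x, c), -v(b, c)))
    return clauses
```

Each clause reads "if cells a and b on a line are both filled, every cell between them is filled too", which is exactly the convexity condition stated in the abstract.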
Healthcare systems have made a dramatic shift towards ubiquitous monitoring in the recent past. The reasons for such a change have been ease of timely diagnosis and the convenience and comfort of clinical treatments. Wireless Body Area Networks (WBANs) are mainly characterized by the deployment of biomedical sensors around the human body, which transmit vital-sign measurements about the health status of the patient. Unfortunately, the heavy traffic load of clinical data and the limited resources of biomedical sensors make efficient long-term operation almost impossible. It is therefore necessary to make significant advances in sensor energy saving. Our idea is to reduce the activity of some sensors according to the relevance between the data they measure and the diseases to be detected. This paper shows how to extend the lifetime of medical WBANs by exploiting the correlation between knowledge about the disease and the sensed data to drive the best scheduling of the medical sensors. For that purpose, the theoretical framework of an economic approach, namely network utility maximization, is developed for sensor scheduling under an operations-cost constraint. We show that a compact subset of sensors can be found that provides the necessary information for timely and correct diagnoses. Based on this theoretical framework, an algorithm combining sensor selection and information gain is then proposed. Simulation results show that the algorithm achieves high performance in terms of energy saving versus latency in disease detection.
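A rough sketch of the sensor-selection idea, assuming hypothetical per-sensor information gains and energy costs and a simple greedy gain-to-cost heuristic; the paper's algorithm, built on network utility maximization, is more elaborate than this.

```python
# Hypothetical medical sensors: an information gain (relevance of their
# measurements to the target disease) and an energy cost per period.
sensors = {
    "ecg":    {"gain": 0.90, "cost": 5.0},
    "spo2":   {"gain": 0.60, "cost": 2.0},
    "temp":   {"gain": 0.20, "cost": 0.5},
    "motion": {"gain": 0.10, "cost": 1.5},
}

def select_sensors(sensors, budget):
    """Greedy knapsack-style heuristic: keep the sensors with the best
    gain-to-cost ratio until the energy budget is exhausted, approximating
    a compact subset that still supports a timely diagnosis."""
    ranked = sorted(sensors,
                    key=lambda s: sensors[s]["gain"] / sensors[s]["cost"],
                    reverse=True)
    chosen, spent = [], 0.0
    for s in ranked:
        if spent + sensors[s]["cost"] <= budget:
            chosen.append(s)
            spent += sensors[s]["cost"]
    return chosen, spent
```

The sensors left out of `chosen` are the ones whose activity would be reduced, which is the energy-saving mechanism described in the abstract.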