In this paper, we propose a source localization algorithm based on a sparse Fast Fourier Transform (FFT)-based feature extraction method and spatial sparsity. We represent the sound source positions as a sparse vector by discretely segmenting the space with a circular grid. The location vector is related to the microphone measurements through a linear equation, which can be estimated at each microphone. For this linear dimensionality reduction, we utilize Compressive Sensing (CS) together with a two-level FFT-based feature extraction method that combines two sets of audio signal features and covers both the short-time and long-time properties of the signal. The proposed feature extraction method leads to a sparse representation of audio signals and thus to a significant reduction in the dimensionality of the signals. Compared to state-of-the-art methods, the proposed method improves accuracy while, in some cases, reducing complexity.
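The sparse-recovery step behind such a formulation can be sketched with a generic Orthogonal Matching Pursuit solver. Everything below is illustrative: the random measurement matrix is a stand-in for the paper's FFT-feature-based operator, and the grid size, measurement count, and source position are arbitrary.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the grid cell whose column best correlates with the residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# toy example: 100 cells on a circular grid, one active source in cell 42
rng = np.random.default_rng(0)
n_cells, n_meas = 100, 60
A = rng.standard_normal((n_meas, n_cells)) / np.sqrt(n_meas)  # stand-in matrix
x_true = np.zeros(n_cells)
x_true[42] = 1.0
y = A @ x_true
x_hat = omp(A, y, k=1)
print(int(np.argmax(np.abs(x_hat))))
```

The printed index is the recovered grid cell; with enough measurements relative to the sparsity level, CS theory guarantees this matches the true source location with high probability.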
This paper investigates the use of time-adaptive self-organizing map (TASOM)-based active contour models (ACMs) for detecting the boundaries of the human eye sclera and tracking its movements in a sequence of images. The task begins with extracting the head boundary based on a skin-color model. Then the eye strip is located with an acceptable accuracy using a morphological method. Eye features such as the iris center or eye corners are detected through the iris edge information. TASOM-based ACM is used to extract the inner boundary of the eye. Finally, by tracking the changes in the neighborhood characteristics of the eye-boundary estimating neurons, the eyes are tracked effectively. The original TASOM algorithm is found to have some weaknesses in this application. These include formation of undesired twists in the neuron chain and holes in the boundary, lengthy chain of neurons, and low speed of the algorithm. These weaknesses are overcome by introducing a new method for finding the winning neuron, a new definition for unused neurons, and a new method of feature selection and application to the network. Experimental results show a very good performance for the proposed method in general and a better performance than that of the gradient vector field (GVF) snake-based method.
This paper proposes an efficient parallel algorithm for computing Lagrange interpolation on k-ary n-cube networks. It exploits the fact that a k-ary n-cube can be decomposed into n link-disjoint Hamiltonian cycles; using these n cycles, the Lagrange polynomial is interpolated using the full bandwidth of the employed network. Communication in the main phase of the algorithm is based on an all-to-all broadcast over the n link-disjoint Hamiltonian cycles that exploits all network channels, resulting in highly efficient use of network resources. A performance evaluation of the proposed algorithm reveals an optimal speedup for a typical range of system parameters used in current state-of-the-art implementations.
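For reference, the quantity being parallelized is the plain Lagrange interpolation sum. A minimal serial version (not the paper's parallel algorithm; there, each processor would own a subset of the terms and combine partial results via the all-to-all broadcast) might look like:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    nodes (xs[i], ys[i]) at the point x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # i-th basis polynomial L_i(x), scaled by the node value ys[i]
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

# interpolate f(x) = x^2 through three nodes; exact for degree <= 2
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]
print(lagrange_eval(xs, ys, 3.0))  # → 9.0
```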
To improve the performance of embedded processors, an effective technique is to collapse critical computation subgraphs into application-specific instruction-set extensions and execute them on custom functional units. The problem with this approach is the immense cost and long time required to design a new processor for each application. As a solution to this issue, we propose an adaptive extensible processor in which custom instructions (CIs) are generated and added after chip fabrication. To support this feature, custom functional units are replaced by a reconfigurable matrix of functional units (FUs). A systematic quantitative approach is used to determine the appropriate structure of the reconfigurable functional unit (RFU). We also introduce an integrated framework for generating CIs that are mappable onto the RFU. Using this architecture, performance is improved by a factor of up to 1.33, with an average improvement of 1.16, compared to a 4-issue in-order RISC processor. By partitioning the configuration memory, detecting similar/subset CIs, and merging small CIs, the size of the configuration memory is reduced by 40%.
About 20 years ago, Markus and Robey noted that most research on IT impacts had been guided by deterministic perspectives and had neglected to use an emergent perspective, which could account for contradictory findings. They further observed that most research in this area had been carried out using variance theories at the expense of process theories. Finally, they suggested that more emphasis on multilevel theory building would likely improve empirical reliability. In this paper, we reiterate the observations and suggestions made by Markus and Robey on the causal structure of IT impact theories and carry out an analysis of empirical research published in four major IS journals, Management Information Systems Quarterly (MISQ), Information Systems Research (ISR), the European Journal of Information Systems (EJIS), and Information and Organization (I&O), to assess compliance with those recommendations. Our final sample consisted of 161 theory-driven articles, accounting for approximately 21% of all the empirical articles published in these journals. Our results first reveal that 91% of the studies in MISQ, ISR, and EJIS focused on deterministic theories, while 63% of those in I&O adopted an emergent perspective. Furthermore, 91% of the articles in MISQ, ISR, and EJIS adopted a variance model; this compares with 71% from I&O that applied a process model. Lastly, mixed levels of analysis were found in 14% of all the surveyed articles. Implications of these findings for future research are discussed.
In this paper, we consider the online version of the following problem: partition a set of input points into subsets, each enclosable by a unit ball, so as to minimize the number of subsets used. In the one-dimensional case, we show that surprisingly the naïve upper bound of 2 on the competitive ratio can be beaten: we present a new randomized 15/8-competitive online algorithm. We also provide some lower bounds and an extension to higher dimensions.
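To make the one-dimensional setting concrete, the naïve deterministic baseline with competitive ratio 2 can be sketched as follows: whenever a point arrives that no existing subset can absorb, open a new subset whose unit interval is anchored at that point. (The paper's randomized 15/8-competitive algorithm refines this baseline; it is not reproduced here.)

```python
def online_unit_cover(points):
    """Naive deterministic online algorithm for the 1-D problem:
    each opened subset is the unit interval [p, p + 1] anchored at
    the first point it was created for."""
    intervals = []  # left endpoints of opened unit intervals
    for p in points:
        # reuse an existing subset if some opened interval covers p
        if not any(left <= p <= left + 1 for left in intervals):
            intervals.append(p)
    return intervals

# five points arriving online; only two unit intervals are opened
print(len(online_unit_cover([0.0, 0.4, 0.9, 1.5, 2.05])))  # → 2
```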
Strategic reasoning about business models is an integral part of service design. In fast-moving markets, businesses must be able to recognize and respond strategically to disruptive change. They have to answer questions such as: what are the threats and opportunities in emerging technologies and innovations? How should they target customer groups? Who are their real competitors? How will competitive battles take shape? In this paper we define a strategic modeling framework to help understand and analyze the goals, intentions, roles, and the rationale behind the strategic actions in a business environment. This understanding is necessary in order to improve existing services or design new ones. The key component of the framework is a strategic business model ontology for representing and analyzing business models and strategies, using the i* agent- and goal-oriented methodology as a basis. The ontology introduces a strategy layer which reasons about alternative strategies that are realized in the operational layer. The framework is evaluated using a retroactive example of disruptive technology in the telecommunication services sector from the literature.
We study the problem of stabilizing a distributed linear system on a subregion of its geometrical domain. We are concerned with two methods: the first approach enables us to characterize a stabilizing control via the steady-state Riccati equation, and the second one is based on decomposing the state space into two suitable subspaces and studying the projections of the initial system onto such subspaces. The obtained results are illustrated through various examples.
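As a sketch of the first approach, for an abstract linear system the standard steady-state Riccati characterization takes the following form (the notation here is generic and assumed, not taken from the paper):

```latex
\dot z(t) = A z(t) + B u(t), \qquad u(t) = -B^{*} P z(t),
```

where the nonnegative operator $P$ solves the algebraic (steady-state) Riccati equation

```latex
A^{*} P + P A - P B B^{*} P + Q = 0,
```

and regional stabilization corresponds to choosing a state-weighting operator $Q$ supported only on the target subregion $\omega$, e.g. $Q = \chi_{\omega}^{*}\chi_{\omega}$ with $\chi_{\omega}$ the restriction to $\omega$.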
This paper deals with defining the concept of the agent-based time delay margin and computing its value in multi-agent systems controlled by event-triggered controllers. The agent-based time delay margin, which specifies the time delay tolerance of each agent for ensuring consensus in event-triggered controlled multi-agent systems, can be considered complementary to the concept of the (network) time delay margin previously introduced in the literature. In this paper, an event-triggered control method for achieving consensus in multi-agent systems with time delay is considered. It is shown that Zeno behavior is excluded by applying this method. Then, in a multi-agent system controlled by the considered event-triggered method, the concept of the agent-based time delay margin in the presence of a fixed network delay is defined. Moreover, an algorithm for computing the value of the time delay margin for each agent is proposed. Numerical simulation results are also provided to verify the obtained theoretical results.
The need for suitable and cost-effective technologies rises with the growth of Internet of Things (IoT) applications, which must handle voluminous data transmission under tight energy and latency constraints. LoRa networks are recommended for applications involving confined spaces, long ranges, and low battery-consumption requirements. However, the end devices in these networks communicate with all gateways in their range, thereby expending energy unproductively on redundant transmissions. In our article, we explore whether LoRa networks can employ the advantages of clustering, and we propose two clustering algorithms, path-based and data-centric, for such networks. We suggest that LoRaWAN technology with clustering can be apt for long-range, low-power-consumption IoT applications in the future. We study the impact of network density, node range, and cluster range on the energy consumed in data transmissions. The algorithms are compared with the inherent star-based communication of LoRa networks in terms of energy consumed, and our results show that clustering becomes advantageous for dense deployments.
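The energy argument for clustering can be illustrated with a first-order radio energy model. The constants, distances, and the assumption that a cluster head aggregates its members' packets into a single uplink frame are all hypothetical choices for this sketch, not the article's model or parameters.

```python
# First-order radio energy model (illustrative constants, not the
# article's model). Energy to send k bits over distance d:
#   E_tx(k, d) = k * (E_ELEC + EPS_AMP * d**2),   E_rx(k) = k * E_ELEC
E_ELEC = 50e-9      # J/bit, electronics energy (assumed)
EPS_AMP = 100e-12   # J/bit/m^2, amplifier energy (assumed)

def e_tx(bits, dist):
    return bits * (E_ELEC + EPS_AMP * dist ** 2)

def e_rx(bits):
    return bits * E_ELEC

def star_energy(n, bits, d_gw):
    """Star topology: every node transmits directly to the gateway."""
    return n * e_tx(bits, d_gw)

def cluster_energy(n, bits, d_gw, d_ch):
    """Clustered topology: members use a short hop to the cluster head,
    which aggregates everything into one uplink frame (assumption)."""
    members = n - 1
    return (members * e_tx(bits, d_ch)   # member -> head, short hop
            + members * e_rx(bits)       # head receives member packets
            + e_tx(bits, d_gw))          # head -> gateway, one frame

# dense deployment far from the gateway: clustering wins
n, bits, d_gw, d_ch = 50, 1000, 2000.0, 50.0
print(star_energy(n, bits, d_gw) > cluster_energy(n, bits, d_gw, d_ch))
```

The comparison flips as density or gateway distance shrinks, which mirrors the article's finding that clustering pays off specifically for dense deployments.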