Journal articles on the topic 'Low computational complexity algorithms'

Consult the top 50 journal articles for your research on the topic 'Low computational complexity algorithms.'

1

Zhang, Xinhe, Wenbo Lv, and Haoran Tan. "Low-Complexity GSM Detection Based on Maximum Ratio Combining." Future Internet 14, no. 5 (May 23, 2022): 159. http://dx.doi.org/10.3390/fi14050159.

Abstract:
Generalized spatial modulation (GSM) is an extension of spatial modulation (SM), and one of its main advantages is further improved bandwidth efficiency. However, using multiple active antennas for transmission also makes demodulation at the receiver more difficult. To address the high computational complexity of optimal maximum likelihood (ML) detection, two sub-optimal detection algorithms are proposed that reduce the number of transmit antenna combinations (TACs) examined at the receiver. The first is a maximum ratio combining detection algorithm based on a repetitive sorting strategy, termed MRC-RS, which uses repetitive sorting to select the most likely TACs. The second, termed MRC-MP, is a maximum ratio combining detection algorithm based on the iterative idea of orthogonal matching pursuit; it reduces the number of TACs through a finite number of iterations to lower the computational complexity. For M-QAM constellations, a hard-limited maximum likelihood (HLML) detection algorithm is introduced to compute the modulation symbol; for M-PSK constellations, a low-complexity maximum likelihood (LCML) algorithm is used instead. The computational complexity of both symbol-computation steps is independent of the modulation order. Simulation results show that for GSM systems with a large number of TACs, the two proposed algorithms achieve almost the same bit error rate (BER) performance as the ML algorithm while greatly reducing the computational complexity.
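The complexity that both proposed detectors attack comes from exhaustive ML detection, which scores every candidate transmit vector. Below is a minimal, generic sketch of that baseline for a small MIMO system (a hypothetical 2x2 QPSK setup, not the paper's GSM configuration):

```python
import itertools
import numpy as np

def ml_detect(y, H, alphabet):
    """Brute-force ML detection: argmin_x ||y - H x||^2 over all candidates."""
    best, best_cost = None, np.inf
    n_tx = H.shape[1]
    for cand in itertools.product(alphabet, repeat=n_tx):  # |alphabet|^n_tx vectors
        x = np.array(cand)
        cost = np.linalg.norm(y - H @ x) ** 2
        if cost < best_cost:
            best, best_cost = x, cost
    return best

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
H = (np.random.randn(2, 2) + 1j * np.random.randn(2, 2)) / np.sqrt(2)
y = H @ np.array([qpsk[0], qpsk[3]]) + 0.05 * (np.random.randn(2) + 1j * np.random.randn(2))
print(ml_detect(y, H, qpsk))
```

In GSM the search additionally runs over all TACs, so pruning unlikely TACs, as MRC-RS and MRC-MP do, cuts the dominant cost factor.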
2

Kharlampovich, Olga, and Alina Vdovina. "Low complexity algorithms in knot theory." International Journal of Algebra and Computation 29, no. 02 (March 2019): 245–62. http://dx.doi.org/10.1142/s0218196718500698.

Abstract:
Agol, Hass and Thurston showed that the problem of determining a bound on the genus of a knot in a 3-manifold is NP-complete. This shows that (unless P = NP) the genus problem has high computational complexity even for knots in a 3-manifold. We initiate the study of classes of knots where the genus problem, and even the equivalence problem, have very low computational complexity. We show that the genus problem for alternating knots with n crossings has linear time complexity and is in Logspace(n). Alternating knots with some additional combinatorial structure will be referred to as standard. As expected, almost all alternating knots of a given genus are standard. We show that the genus problem for these knots belongs to the TC0 circuit complexity class. We also show that the equivalence problem for such knots with n crossings has time complexity n log(n) and is in the Logspace(n) and TC0 complexity classes.
3

Chen, Haihua, Jialiang Hu, Hui Tian, Shibao Li, Jianhang Liu, and Masakiyo Suzuki. "A Low-Complexity GA-WSF Algorithm for Narrow-Band DOA Estimation." International Journal of Antennas and Propagation 2018 (November 4, 2018): 1–6. http://dx.doi.org/10.1155/2018/7175653.

Abstract:
This paper proposes a low-complexity estimation algorithm for weighted subspace fitting (WSF) based on the Genetic Algorithm (GA) for narrow-band direction-of-arrival (DOA) finding. Among the various DOA techniques, WSF is one of the algorithms with the highest estimation accuracy. However, its criterion is a multimodal nonlinear multivariate optimization problem; as a result, the computational complexity of WSF is very high, which prevents its application in real systems. The Genetic Algorithm is considered an effective way to find the global solution of WSF, but a conventional GA usually needs a large population size to cover the whole search space and a large number of generations to converge, so its computational complexity remains high. To reduce the computational complexity of WSF, this paper proposes an improved Genetic Algorithm. First, a hypothesis technique is used to obtain a rough DOA estimate for WSF. Then, a dynamic initialization space is formed around this value using an empirical function. Within this space, a smaller population size and fewer generations are required; consequently, the computational complexity is reduced. Simulation results show the efficiency of the proposed algorithm in comparison with many existing algorithms.
4

Xue, Xiaomei, Zhengquan Li, Yongqiang Man, Song Xing, Yang Liu, Baolong Li, and Qiong Wu. "Improved Massive MIMO RZF Precoding Algorithm Based on Truncated Kapteyn Series Expansion." Information 10, no. 4 (April 11, 2019): 136. http://dx.doi.org/10.3390/info10040136.

Abstract:
To reduce the computational complexity of the matrix inversion in the regularized zero-forcing (RZF) precoding algorithm, this paper expands and approximates the inverse matrix using a truncated Kapteyn series expansion, and the corresponding low-complexity RZF precoding algorithm is obtained. In addition, the expansion coefficients of the truncated Kapteyn series are optimized, further improving the convergence speed of the precoding algorithm while keeping the same computational complexity as traditional RZF precoding. Moreover, the computational complexity and the downlink channel performance, in terms of average achievable rate, of the proposed RZF precoding algorithm and of other RZF precoding algorithms based on typical truncated series expansions are analyzed, and further evaluated by numerical simulations in a large-scale single-cell multiple-input multiple-output (MIMO) system. Simulation results show that the proposed RZF precoding algorithm based on the truncated Kapteyn series expansion performs better than the compared algorithms while keeping low computational complexity.
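The Kapteyn-series construction is specific to the paper, but the underlying trick, replacing an explicit inverse with a truncated matrix series, is easy to see with the simpler Neumann expansion. A minimal sketch, assuming a Hermitian positive-definite Gram matrix (an illustrative stand-in, not the authors' method):

```python
import numpy as np

def rzf_series(H, alpha, K):
    """RZF precoder W = (H^H H + alpha I)^{-1} H^H with the inverse replaced by
    a truncated Neumann series: A^{-1} ~ (1/theta) * sum_{k=0..K} (I - A/theta)^k.
    (Stand-in illustration; the paper uses an optimized truncated Kapteyn series.)"""
    A = H.conj().T @ H + alpha * np.eye(H.shape[1])
    theta = np.trace(A).real          # upper-bounds the largest eigenvalue, so the series converges
    B = np.eye(A.shape[0]) - A / theta
    term = np.eye(A.shape[0]) / theta
    inv_approx = np.zeros_like(A)
    for _ in range(K + 1):            # K + 1 terms: matrix products instead of an explicit inversion
        inv_approx = inv_approx + term
        term = B @ term
    return inv_approx @ H.conj().T
```

Optimizing the expansion coefficients, as the paper does, buys faster convergence at the same per-term cost.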
5

Manis, George, Md Aktaruzzaman, and Roberto Sassi. "Low Computational Cost for Sample Entropy." Entropy 20, no. 1 (January 13, 2018): 61. http://dx.doi.org/10.3390/e20010061.

Abstract:
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used on long series or with a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms, or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first is an extension of the kd-trees algorithm, customized for Sample Entropy. The second is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to deliver even faster results. The last is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear image of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the last two algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to those comparisons for which we know a priori that they will fail the similarity check. The number of avoided comparisons is shown to be very large, resulting in a correspondingly large reduction of execution time, making them the fastest algorithms available today for the computation of Sample Entropy.
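For reference, the "straightforward implementation, directly resulting from the definition" that the fast algorithms are benchmarked against looks like the following minimal sketch (standard Richman-Moorman definition, maximum norm, self-matches excluded):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Brute-force Sample Entropy: O(N^2) template comparisons.
    r is given as a fraction of the standard deviation of the series."""
    x = np.asarray(x, dtype=float)
    N, tol = len(x), r * np.std(x)

    def matches(w):
        # N - m templates of length w (w = m or m + 1)
        t = [x[i:i + w] for i in range(N - m)]
        count = 0
        for i in range(len(t)):
            for j in range(i + 1, len(t)):          # j > i excludes self-matches
                if np.max(np.abs(t[i] - t[j])) <= tol:
                    count += 1
        return count

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B)   # undefined when no matches are found
```

The double loop is exactly the similarity check that the paper's algorithms prune by skipping comparisons known a priori to fail.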
6

Zhang, Xinhe, Yuehua Zhang, Chang Liu, and Hanzhong Jia. "Low-Complexity Detection Algorithms for Spatial Modulation MIMO Systems." Journal of Electrical and Computer Engineering 2018 (November 15, 2018): 1–7. http://dx.doi.org/10.1155/2018/4034625.

Abstract:
In this paper, the authors propose three low-complexity detection schemes for spatial modulation (SM) systems based on modified beam search (MBS) detection. The MBS detector, which splits the search tree into subtrees, reduces the computational complexity by decreasing the number of nodes retained in each layer. However, the MBS detector takes into account neither the effect of the subtree search order on computational complexity nor the effect of the layer search order on bit-error-rate (BER) performance. The ost-MBS detector starts the search from the subtree where the optimal solution is most likely to be located, which reduces the total number of nodes searched in subsequent subtrees and thus decreases the computational complexity. When the number of retained nodes is fixed, which nodes are retained is very important; that is, different layer search orders have a direct influence on the BER. Based on this, we propose the oy-MBS detector. The ost-oy-MBS detector combines the detection orders of ost-MBS and oy-MBS. The algorithm analysis and experimental results show that the proposed detectors outperform MBS with respect to both BER performance and computational complexity.
7

Rampa, Vittorio. "Design and Implementation of a Low Complexity Multiuser Detector for Hybrid CDMA Systems." Journal of Communications Software and Systems 1, no. 1 (April 6, 2017): 42. http://dx.doi.org/10.24138/jcomss.v1i1.316.

Abstract:
In hybrid CDMA systems, multiuser detection (MUD) algorithms are adopted at the base station to reduce both multiple access and inter-symbol interference by exploiting space-time (ST) signal processing techniques. Linear ST-MUD algorithms solve a linear problem where the system matrix has a block-Toeplitz shape. While exact inversion techniques impose an intolerable computational load, reduced-complexity algorithms may be efficiently employed, even though they show suboptimal behavior, introducing performance degradation and near-far effects. The block-Fourier MUD algorithm is generally considered the most effective one. However, the block-Bareiss MUD algorithm, which has recently been reintroduced, also shows good performance and low computational complexity, comparing favorably with the block-Fourier one. In this paper, both MUD algorithms are compared, along with other well-known ones, in terms of complexity, performance figures, hardware feasibility and implementation issues. Finally, a short hardware description of the block-Bareiss and block-Fourier algorithms is presented, along with an FPGA (Field Programmable Gate Array) implementation of the block-Fourier algorithm using standard VHDL (VHSIC Hardware Description Language) design.
8

Wang, Yuhuan, Jianguo Li, Neng Ye, and Xiangyuan Bu. "Novel Low Complexity BP Decoding Algorithms for Polar Codes: Simplifying on Non-Linear Operations." Electronics 11, no. 1 (December 28, 2021): 93. http://dx.doi.org/10.3390/electronics11010093.

Abstract:
The parallel nature of the belief propagation (BP) decoding algorithm for polar codes opens up a real possibility of high throughput and low decoding latency in hardware implementations. To address the problem that the BP decoding algorithm introduces high-complexity non-linear operations in the iterative message update process, this paper proposes to simplify these operations and develops two novel low-complexity BP decoding algorithms, namely the exponential BP (Exp-BP) decoding algorithm and the quantization function BP (QF-BP) decoding algorithm. The proposed algorithms simplify the compound hyperbolic tangent function using probability distribution fitting techniques. Specifically, the Exp-BP algorithm simplifies two types of non-linear operations into a single non-linear operation using a piece-wise exponential model function, which approximates the hyperbolic tangent function in the update formula. The QF-BP algorithm eliminates non-linear operations using non-uniform quantization in the update formula, which is effective in reducing computational complexity. According to the simulation results, the proposed algorithms can reduce the computational complexity by up to 50% in each iteration with a loss of less than 0.1 dB compared with the BP decoding algorithm, which can facilitate hardware implementation.
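The non-linear operation being simplified is the check-node "boxplus" update built from hyperbolic tangents. For orientation, the classic low-complexity substitute is min-sum, sketched below; the paper's Exp-BP and QF-BP take different routes (piece-wise exponential fitting and non-uniform quantization, respectively):

```python
import numpy as np

def boxplus_exact(a, b):
    """Exact LLR check-node update: 2 * atanh(tanh(a/2) * tanh(b/2))."""
    return 2.0 * np.arctanh(np.tanh(a / 2.0) * np.tanh(b / 2.0))

def boxplus_minsum(a, b):
    """Classic min-sum simplification: sign(a) * sign(b) * min(|a|, |b|).
    Removes all transcendental operations at a small accuracy cost."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

a, b = 1.8, -0.7
print(boxplus_exact(a, b), boxplus_minsum(a, b))   # approx -0.49 vs -0.70
```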
9

Otunniyi, Temidayo O., and Hermanus C. Myburgh. "Low-Complexity Filter for Software-Defined Radio by Modulated Interpolated Coefficient Decimated Filter in a Hybrid Farrow." Sensors 22, no. 3 (February 3, 2022): 1164. http://dx.doi.org/10.3390/s22031164.

Abstract:
Realising a low-complexity Farrow channelisation algorithm for multi-standard receivers in software-defined radio is a challenging task. A Farrow filter operates best at low frequencies, while its performance degrades towards the Nyquist region; this makes wideband channelisation in software-defined radio computationally demanding. In this paper, a hybrid Farrow (HFarrow) algorithm that combines a modulated Farrow filter with a frequency-response interpolated coefficient decimated masking filter is proposed for the design of a novel filter bank with low computational complexity. A design example shows that the HFarrow filter bank achieved multiplier reductions of 50%, 70% and 64%, respectively, in comparison with the non-uniform modulated discrete Fourier transform (NU MDFT FB), coefficient decimated filter bank (CD FB) and interpolated coefficient decimated (ICDM) filter algorithms, while providing the same number of sub-band channels as those algorithms.
10

Chen, Haihua, Haoran Li, Mingyang Yang, Changbo Xiang, and Masakiyo Suzuki. "General Improvements of Heuristic Algorithms for Low Complexity DOA Estimation." International Journal of Antennas and Propagation 2019 (December 11, 2019): 1–9. http://dx.doi.org/10.1155/2019/3858794.

Abstract:
Heuristic algorithms are considered effective approaches for super-resolution DOA estimators such as Deterministic Maximum Likelihood (DML), Stochastic Maximum Likelihood (SML), and Weighted Subspace Fitting (WSF), which involve nonlinear multi-dimensional optimization. Traditional heuristic algorithms usually need a large number of particles and many iterations, so their computational complexity is still rather high, which prevents the application of these super-resolution techniques in real systems. To reduce the computational complexity of heuristic algorithms for these super-resolution DOA techniques, this paper proposes three general improvements: optimization of the initialization space, optimization of the evolutionary strategies, and the use of parallel computing techniques. Simulation results show that the computational complexity can be greatly reduced when these improvements are used.
11

Kim, Un Seob, and Myung Hoon Sunwoo. "New Frame Rate Up-Conversion Algorithms With Low Computational Complexity." IEEE Transactions on Circuits and Systems for Video Technology 24, no. 3 (March 2014): 384–93. http://dx.doi.org/10.1109/tcsvt.2013.2278142.

12

Kountchev, Roumen K., Rumen P. Mironov, and Roumiana A. Kountcheva. "Hierarchical Cubical Tensor Decomposition through Low Complexity Orthogonal Transforms." Symmetry 12, no. 5 (May 25, 2020): 864. http://dx.doi.org/10.3390/sym12050864.

Abstract:
In this work, new approaches are proposed for the 3D decomposition of a cubical tensor of size N × N × N, for N = 2^n, through hierarchical deterministic orthogonal transforms with low computational complexity, whose kernels are based on the Walsh-Hadamard Transform (WHT) and the Complex Hadamard Transform (CHT). On the basis of the symmetry properties of the real and complex Walsh-Hadamard matrices, fast computational algorithms are developed whose computational complexity is compared with that of well-known deterministic transforms: the 3D Fast Fourier Transform, the 3D Discrete Wavelet Transform and the statistical Hierarchical Tucker decomposition. The comparison shows the lower computational complexity of the offered algorithms. Additionally, they ensure high energy concentration of the original tensor into a small number of coefficients of the resulting transformed spectrum tensor. The main advantage of the proposed algorithms is the reduction of the required calculations, due to the low number of hierarchical levels, compared with the significant number of iterations needed to achieve the required decomposition accuracy in the statistical methods. The choice of the 3D hierarchical decomposition is defined by the requirements and limitations of the corresponding application area.
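The Walsh-Hadamard kernel at the heart of these transforms has a classic fast algorithm that uses only additions and subtractions. A minimal 1-D sketch (the paper applies such kernels hierarchically along the three tensor dimensions):

```python
import numpy as np

def fwht(a):
    """In-place fast Walsh-Hadamard transform (natural order):
    N*log2(N) additions/subtractions, no multiplications."""
    a = np.array(a, dtype=float)
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):   # butterfly on the pair (j, j + h)
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```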
13

Shin, JaeWook, and Jaegeol Cho. "Noise-Robust Heart Rate Estimation Algorithm from Photoplethysmography Signal with Low Computational Complexity." Journal of Healthcare Engineering 2019 (May 21, 2019): 1–7. http://dx.doi.org/10.1155/2019/6283279.

Abstract:
This paper introduces a noise-robust HR estimation algorithm using wrist-type PPG signals that consists of a preprocessing block, a motion artifact reduction block, and a frequency tracking block. The proposed algorithm is not only robust to motion noise but also has low computational complexity. It was tested on a data set of 12 subjects recorded during treadmill exercise in order to verify it and compare it with other existing algorithms.
14

Lv, Ming Xia, Yan Kun Lai, and Dong Tang. "Performance Analysis of Low Complexity Multiuser Scheduling Algorithms in MIMO System." Applied Mechanics and Materials 392 (September 2013): 867–71. http://dx.doi.org/10.4028/www.scientific.net/amm.392.867.

Abstract:
The total throughput of a communication system can be maximized by allocating the common radio resource to the user or user group with the best channel quality at a given time, and multiuser diversity gain can be obtained when multiple users share the same channel at one time. The goal of user selection is to choose the set of users with the maximum sum capacity. As a scheduling algorithm, exhaustive search achieves the largest system capacity in multi-user scheduling; however, it is quite complex and therefore substantially increases the computational cost at the base station. We compare the multiuser performance of two fast, low-complexity user selection algorithms in MIMO-MRC systems with co-channel interference. The simulation results show that these two algorithms not only decrease the computational complexity of scheduling but also retain most of the capacity of the MIMO system.
15

Luts, Ya, and V. Luts. "About the Development of a High-Speed Simplified Image Codec." Cybernetics and Computer Technologies, no. 1 (March 30, 2021): 61–66. http://dx.doi.org/10.34229/2707-451x.21.1.6.

Abstract:
To support the development of a high-speed simplified image codec, the influence of known image compression algorithms and other parameters on performance was analysed. The relevance and expediency of developing a high-speed simplified image codec for the Internet of Things were substantiated, with the goals of increasing the autonomy of IoT devices and reducing the cost of building and deploying IoT infrastructure. An efficiency coefficient for image compression algorithms was introduced, defined as the ratio between the computational complexity of the algorithms and their contribution to the final result. Simplifying and reducing the number of pixel-value prediction algorithms was proposed and substantiated, because at this stage the procedure of comparing different prediction algorithms with each other adds a significant number of computational operations. It is proposed to use only one block integer transform with fast low-complexity computation, which significantly reduces the complexity of the block transform stage, in part by eliminating the computationally expensive comparison of the quality of alternative block transforms. At the entropy coding stage, it is also proposed to use simplified algorithms, because the contribution of this stage to the overall result is quite small while its computational complexity is high (50-70% of all calculations). A new algorithm for progressive image transfer was proposed: the transfer of a reduced image followed by the transfer of the original image on demand. The considered approaches and algorithms for developing a high-speed simplified image codec can also be applied to the future development of a high-speed simplified video codec. Keywords: computational complexity, fast transforms, computational efficiency, progressive data transfer, intra-prediction algorithms, simplified image codec, IoT.
16

Garcia, Luís P. F., Adriano Rivolli, Edesio Alcoba, Ana C. Lorena, and André C. P. L. F. de Carvalho. "Boosting meta-learning with simulated data complexity measures." Intelligent Data Analysis 24, no. 5 (September 30, 2020): 1011–28. http://dx.doi.org/10.3233/ida-194803.

Abstract:
Meta-Learning has been largely used over the last years to support the recommendation of the most suitable machine learning algorithm(s) and hyperparameters for new datasets. Traditionally, a meta-base is created containing meta-features extracted from several datasets along with the performance of a pool of machine learning algorithms when applied to these datasets. The meta-features must describe essential aspects of the dataset and distinguish different problems and solutions. However, if one wants the use of Meta-Learning to be computationally efficient, the extraction of the meta-feature values should also show a low computational cost, considering a trade-off between the time spent to run all the algorithms and the time required to extract the meta-features. One class of measures with successful results in the characterization of classification datasets is concerned with estimating the underlying complexity of the classification problem. These data complexity measures take into account the overlap between classes imposed by the feature values, the separability of the classes and distribution of the instances within the classes. However, the extraction of these measures from datasets usually presents a high computational cost. In this paper, we propose an empirical approach designed to decrease the computational cost of computing the data complexity measures, while still keeping their descriptive ability. The proposal consists of a novel Meta-Learning system able to predict the values of the data complexity measures for a dataset by using simpler meta-features as input. In an extensive set of experiments, we show that the predictive performance achieved by Meta-Learning systems which use the predicted data complexity measures is similar to the performance obtained using the original data complexity measures, but the computational cost involved in their computation is significantly reduced.
17

Ramdane, Mohamed Amine, Ahmed Benallal, Mountassar Maamoun, and Islam Hassani. "Partial Update Simplified Fast Transversal Filter Algorithms for Acoustic Echo Cancellation." Traitement du Signal 39, no. 1 (February 28, 2022): 11–19. http://dx.doi.org/10.18280/ts.390102.

Abstract:
Robust algorithms applied in acoustic echo cancellation systems present an excessive computational load that has to be minimized. In this paper, we propose two low-complexity fast least squares algorithms, called the Partial Update Simplified Fast Transversal Filter (PU-SMFTF) algorithm and the Reduced Partial Update Simplified Fast Transversal Filter (RPU-SMFTF) algorithm. The first algorithm reduces the computational complexity in both the filtering and prediction parts, using the M-Max method for coefficient selection. The second algorithm applies the partial update technique to the filtering part, combined with a P-size forward predictor, to obtain a further complexity reduction. The results show a computational complexity reduction from (7L+8) to (L+6M+8) for the PU-SMFTF algorithm and from (7L+8) to (L+M+4P+17) for the RPU-SMFTF algorithm, compared with the original Simplified Fast Transversal Filter (SMFTF). Furthermore, experiments carried out in the context of acoustic echo cancellation demonstrate that the proposed algorithms provide faster convergence, better tracking capability and better steady-state performance than the NLMS and SMFTF algorithms.
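The M-Max selection idea is easier to see outside the fast transversal filter structure: per sample, only the M taps whose regressor entries are largest in magnitude are adapted. The sketch below applies it to plain NLMS as a stand-in (the paper embeds the same selection inside the SMFTF recursions):

```python
import numpy as np

def mmax_nlms(d, x, L=64, M=16, mu=0.5, eps=1e-6):
    """NLMS with M-Max partial updates: adapt only the M taps aligned with the
    largest-magnitude regressor entries (illustrative stand-in for PU-SMFTF)."""
    w = np.zeros(L)
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        u = x[n - L:n][::-1]                        # regressor vector
        e[n] = d[n] - w @ u
        sel = np.argpartition(np.abs(u), -M)[-M:]   # indices of the M largest |u_i|
        w[sel] += mu * e[n] * u[sel] / (u @ u + eps)
    return w, e
```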
18

Długosz, Zofia, Michał Rajewski, Rafał Długosz, and Tomasz Talaśka. "A Novel, Low Computational Complexity, Parallel Swarm Algorithm for Application in Low-Energy Devices." Sensors 21, no. 24 (December 17, 2021): 8449. http://dx.doi.org/10.3390/s21248449.

Abstract:
In this work, we propose a novel metaheuristic algorithm that evolved from the conventional particle swarm optimization (PSO) algorithm for application in miniaturized devices and systems that require low energy consumption. The modifications allowed us to substantially reduce the computational complexity of the PSO algorithm, translating into reduced energy consumption in hardware implementations. This is a paramount feature in devices used, for example, in wireless sensor networks (WSNs) or wireless body area networks (WBANs), in which particular devices have limited access to a power source. Various swarm algorithms are widely used to solve problems that require searching for an optimal solution amid a number of sub-optimal solutions, which makes their hardware implementation worth considering. However, hardware implementation of the conventional PSO algorithm is a challenging task; one of the issues is an efficient implementation of the randomization function. In this work, we propose novel methods to work around this problem. In the proposed approach, we replace the block responsible for generating random values with deterministic methods, which differentiate the trajectories of particular particles in the swarm. Comprehensive investigations with a software model of the modified algorithm have shown that its performance is comparable with, or even surpasses, that of the conventional PSO algorithm in a multitude of scenarios. The proposed algorithm was tested with numerous fitness functions to verify its flexibility and adaptability to different problems. The paper also presents the hardware implementation of the selected blocks that modify the algorithm. In particular, we focused on reducing the hardware complexity and achieving high-speed operation while reducing energy consumption.
19

Walczyk, Cezary J., Leonid V. Moroz, and Jan L. Cieśliński. "Improving the Accuracy of the Fast Inverse Square Root by Modifying Newton–Raphson Corrections." Entropy 23, no. 1 (January 9, 2021): 86. http://dx.doi.org/10.3390/e23010086.

Abstract:
Direct computation of functions using low-complexity algorithms can be applied both for hardware constraints and in systems where storage capacity is a challenge for processing a large volume of data. We present improved algorithms for fast calculation of the inverse square root function for single-precision and double-precision floating-point numbers. Higher precision is also discussed. Our approach consists in minimizing maximal errors by finding optimal magic constants and modifying the Newton–Raphson coefficients. The obtained algorithms are much more accurate than the original fast inverse square root algorithm and have similar very low computational costs.
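The starting point is the classic fast inverse square root, reproduced below with the original Quake III magic constant and one Newton-Raphson step. The paper's contribution is to optimize this constant and modify the correction coefficients; those improved values are in the article and are not reproduced here:

```python
import struct

def fast_inv_sqrt(x, newton_steps=1):
    """Classic fast inverse square root for a positive float32 value."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]   # reinterpret float bits as uint32
    i = 0x5F3759DF - (i >> 1)                          # magic-constant initial guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    for _ in range(newton_steps):
        y = y * (1.5 - 0.5 * x * y * y)                # Newton-Raphson correction
    return y

print(fast_inv_sqrt(2.0), 2.0 ** -0.5)
```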
20

CHINNASARN, KRISANA, CHIDCHANOK LURSINSAP, and VASILE PALADE. "BLIND SEPARATION OF MIXED KURTOSIS SIGNED SIGNALS USING PARTIAL OBSERVATIONS AND LOW COMPLEXITY ACTIVATION FUNCTIONS." International Journal of Computational Intelligence and Applications 04, no. 02 (June 2004): 207–23. http://dx.doi.org/10.1142/s1469026804001239.

Abstract:
Although several highly accurate blind source separation algorithms have already been proposed in the literature, these algorithms must store and process the whole data set, which may be tremendous in some situations. This makes blind source separation infeasible and not realisable at the VLSI level, due to the large memory requirement and costly computation. This paper concerns algorithms for dealing with tremendous data sets and high computational complexity, so that they can run on-line and be implemented at the VLSI level with acceptable accuracy. Our approach is to partition the observed signals into several parts and to extract the partitioned observations with a simple activation function performing only "shift-and-add" micro-operations; no division, multiplication or exponential operations are needed. Moreover, a way of obtaining an optimal initial de-mixing weight matrix to speed up separation is also presented. The proposed algorithm is tested on benchmarks available online. The experimental results show that our solution provides efficiency comparable with other approaches, but with lower space and time complexity.
21

Chen, Shanshan, Zhicai Shi, Fei Wu, Changzhi Wang, Jin Liu, and Jiwei Chen. "Improved 3-D Indoor Positioning Based on Particle Swarm Optimization and the Chan Method." Information 9, no. 9 (August 22, 2018): 208. http://dx.doi.org/10.3390/info9090208.

Abstract:
Time of arrival (TOA) measurement is a promising method for positioning a target from a set of nodes with known positions, offering high accuracy and low computational complexity. However, most TOA-based positioning methods (such as least squares estimation, maximum likelihood, and the Chan algorithm) cannot provide the desired accuracy while maintaining high computational efficiency when there is a non-line-of-sight (NLOS) path between base stations and user terminals. Therefore, in this paper, we propose a 3-D positioning system based on particle swarm optimization (PSO) and an improved Chan algorithm to greatly improve positioning accuracy while decreasing computation time. In the system, PSO is used to estimate the initial location of the target, which effectively eliminates the NLOS error. Starting from this initial location, the improved Chan algorithm performs iterative computations quickly to obtain the final location of the target. The proposed method also has computational benefits for large-scale base-station positioning problems while retaining high positioning accuracy and low computational complexity. The experimental results demonstrate that our algorithm has the best time efficiency and good practicability among state-of-the-art algorithms.
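A minimal global-best PSO minimizer conveys the first stage of the system; for positioning, the objective f would be, for example, the sum of squared TOA residuals over the known base-station positions. This is a generic sketch, not the authors' exact variant:

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO minimizing f over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # inertia + memory + social terms
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g
```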
22

Park, Jangsoo, Jongseok Lee, and Donggyu Sim. "Low-complexity CNN with 1D and 2D filters for super-resolution." Journal of Real-Time Image Processing 17, no. 6 (September 21, 2020): 2065–76. http://dx.doi.org/10.1007/s11554-020-01019-1.

Abstract:
This paper proposes a low-complexity convolutional neural network (CNN) for super-resolution (SR). The proposed deep-learning model for SR has two layers to deal with horizontal, vertical, and diagonal visual information. The front-end layer extracts the horizontal and vertical high-frequency signals using a CNN with one-dimensional (1D) filters. In the high-resolution image-restoration layer, the high-frequency signals in the diagonal directions are processed by additional two-dimensional (2D) filters. Because the proposed model consists of 1D and 2D filters, it reduces the computational complexity of existing SR algorithms with negligible visual loss. The computational complexity of the proposed algorithm is 71.37%, 61.82%, and 50.78% lower on CPU, TPU, and GPU, respectively, than that of the very-deep SR (VDSR) algorithm, with a peak signal-to-noise ratio loss of 0.49 dB.
23

Alemaishat, Saraereh, Khan, Affes, Li, and Lee. "An Efficient Precoding Scheme for Millimeter-Wave Massive MIMO Systems." Electronics 8, no. 9 (August 24, 2019): 927. http://dx.doi.org/10.3390/electronics8090927.

Abstract:
To address the high computational complexity caused by the large number of antennas deployed in millimeter-wave massive multiple-input multiple-output (MIMO) communication systems, this paper proposes an efficient, low-complexity, codebook-based algorithm for optimizing beam control vectors in systems with a split sub-array hybrid beamforming architecture. A bidirectional method is applied to the beam control vector of each antenna sub-array at both the transmitter and the receiver, exploiting the ideas of interference alignment (IA) and alternating optimization. The simulation results show that the proposed algorithm has low computational complexity, fast convergence, and improved spectral efficiency compared with state-of-the-art algorithms.
24

Zhang, Jinhui, Chengshi Zheng, Fangjie Zhang, and Xiaodong Li. "A Low-Complexity Volterra Filtered-Error LMS Algorithm with a Kronecker Product Decomposition." Applied Sciences 11, no. 20 (October 15, 2021): 9637. http://dx.doi.org/10.3390/app11209637.

Abstract:
Nonlinear active noise control is very important in many practical applications. Many well-known nonlinear active noise control algorithms suffer from high computational complexity and low convergence speed, especially when the secondary path is nonlinear, so reducing complexity and improving the convergence rate remain active research topics. This paper presents a low-complexity Volterra filtered-error least mean square algorithm that exploits a decomposable Volterra model for active control of nonlinear noise processes, referred to as DVMFELMS. The computational complexity analysis shows that the proposed DVMFELMS algorithm can significantly reduce the complexity of the nonlinear active noise control system. The simulation results further show that the proposed algorithm achieves promising performance compared with the Volterra-based FELMS algorithm and other state-of-the-art nonlinear filters, although the decomposition error of the Volterra kernel may inevitably be introduced. Moreover, the proposed DVMFELMS algorithm shows a better convergence rate in the broadband primary noise case due to the smaller number of parameters in each sub-filter.
25

Gong, Faming, Haihua Chen, Shibao Li, Jianhang Liu, Zhaozhi Gu, and Masakiyo Suzuki. "A Low Computational Complexity SML Estimation Algorithm of DOA for Wireless Sensor Networks." International Journal of Distributed Sensor Networks 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/352012.

Abstract:
We address the problem of DOA estimation in the positioning of nodes in wireless sensor networks. The Stochastic Maximum Likelihood (SML) algorithm is adopted in this paper. The SML algorithm is well known for its high resolution in DOA estimation, but its computational complexity is very high because a multidimensional nonlinear optimization problem is usually involved. To reduce the computational complexity of SML estimation, we do the following work. (1) We point out the problems of the conventional SML criterion and explain why and how these problems arise. (2) We propose a local AM search method that can find the local solution near/around an initial value. (3) We propose an algorithm that uses the local AM search method, together with a DML or MUSIC estimate as the initial value, to find the solution of SML. Simulation results demonstrate the effectiveness and efficiency of the proposed algorithms. In particular, the algorithm that uses the local AM method with the MUSIC estimate as the initial value has much higher resolution than, and comparable computational complexity to, MUSIC.
26

Bipin, C., C. V. Rao, P. V. Sridevi, S. Jayabharathi, and B. G. Krishna. "A MULTI THREADED FEATURE EXTRACTION TOOL FOR SAR IMAGES USING OPEN SOURCE SOFTWARE LIBRARIES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-5 (November 19, 2018): 155–60. http://dx.doi.org/10.5194/isprs-archives-xlii-5-155-2018.

Abstract:
In this paper, we propose a software architecture for a feature extraction tool suitable for the automatic extraction of sparse features from large remote sensing data, capable of using higher-order algorithms (computational complexity greater than O(n)). Many features in remote sensing data, such as roads, water bodies and buildings, are sparse in nature. Remote sensing deals with volumes of data usually too large to be held fully in the primary memory of typical workstations. For this reason, algorithms of higher computational complexity are not used for feature extraction from remote sensing images. A good number of remote sensing algorithms are based on formulating a representative index, typically using a kernel function of linear or lower computational complexity (at most O(n)). This approach makes it possible to complete the operation in deterministic time and memory.

Feature extraction from Synthetic Aperture Radar (SAR) images requires more computationally intensive algorithms due to less spectral information and high noise. Higher-order algorithms based on the Fast Fourier Transform (FFT), the Gray Level Co-occurrence Matrix (GLCM), wavelets, curvelets, etc. are not preferred in automatic feature extraction from remote sensing images due to their higher computational complexity. They are often used on small subsets, or in association with a database where the location and maximum extent of the features are stored beforehand; in that case, only characterization of the feature is carried out on the data.

In this paper, we demonstrate a system architecture that can overcome the shortcomings of both these approaches on a multi-threaded platform. The feature extraction problem is divided into a low-complexity, lower-accuracy stage followed by a computationally complex algorithm in an augmented space. The sparse nature of the features gives the flexibility to evaluate features in Regions Of Interest (ROIs). Each operation is carried out in multiple threads to minimize the latency of the algorithm. The computationally intensive algorithm evaluates the ROIs provided by the low-complexity stage. The system also decouples complex operations using multi-threading.

The system is a customized solution developed entirely in Python using various open source software libraries. This approach has made it possible to carry out automatic feature extraction from large SAR data. The architecture was tested and found to give promising results for the extraction of inland water layers and dark features on the ocean surface from SAR data.
27

Ghassemi, A., and T. A. Gulliver. "Low-Complexity Distortionless Techniques for Peak Power Reduction in OFDM Communication Systems." Journal of Computer Networks and Communications 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/929763.

Abstract:
A high peak-to-average power ratio (PAPR) is one of the major drawbacks of orthogonal frequency division multiplexing (OFDM) modulation. The three most effective distortionless techniques for PAPR reduction are partial transmit sequence (PTS), selective mapping (SLM), and tone reservation (TR). However, the high computational complexity of the inverse discrete Fourier transform (IDFT) is a problem with these approaches, whose implementations typically employ direct computation of the IDFT, which is not the most efficient solution. In this paper, we consider the development and performance analysis of these distortionless techniques in conjunction with low-complexity IFFT algorithms to reduce the PAPR of the OFDM signal. The recently proposed IFFT-based techniques are shown to substantially reduce the computational complexity and improve PAPR performance.
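PAPR itself is cheap to evaluate once the oversampled IDFT is available, which is why the IDFT dominates the cost of scoring PTS/SLM/TR candidates. A minimal sketch for a single OFDM symbol (generic parameters; 4x oversampling assumed):

```python
import numpy as np

def papr_db(X, oversample=4):
    """PAPR of one OFDM symbol: zero-pad the spectrum in the middle
    (trigonometric interpolation), IDFT, then max-to-mean power ratio in dB."""
    N = len(X)
    padded = np.concatenate([X[:N // 2],
                             np.zeros((oversample - 1) * N, dtype=complex),
                             X[N // 2:]])
    x = np.fft.ifft(padded)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

qpsk = np.random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)
print(papr_db(qpsk))
```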
28

Ang, Li-Minn, Kah Phooi Seng, and Christopher Wing Hong Ngau. "Biologically Inspired Components in Embedded Vision Systems." International Journal of Systems Biology and Biomedical Technologies 3, no. 1 (January 2015): 39–72. http://dx.doi.org/10.4018/ijsbbt.2015010103.

Abstract:
Biological vision components such as visual attention (VA) algorithms aim to mimic the mechanisms of the human vision system. VA algorithms are often complex and require high computational and memory resources to be realized. In biologically inspired vision and embedded systems, computational capacity and memory resources are a primary concern. This paper presents a discussion of implementing VA algorithms in embedded vision systems in a resource-constrained environment. The authors survey various types of VA algorithms and identify potential techniques which can be implemented in embedded vision systems. They then propose a low-complexity, low-memory VA model based on a well-established mainstream VA model. The proposed model addresses critical factors in terms of algorithm complexity, memory requirements, computational speed, and salience prediction performance to ensure the reliability of the VA in a resource-constrained environment. Finally, a custom softcore microprocessor-based hardware implementation on a Field-Programmable Gate Array (FPGA) is used to verify the implementation feasibility of the presented model.
29

Guerreiro, Andreia P., Carlos M. Fonseca, and Luís Paquete. "Greedy Hypervolume Subset Selection in Low Dimensions." Evolutionary Computation 24, no. 3 (September 2016): 521–44. http://dx.doi.org/10.1162/evco_a_00188.

Abstract:
Given a nondominated point set X ⊂ R^d of size n and a suitable reference point r ∈ R^d, the Hypervolume Subset Selection Problem (HSSP) consists of finding a subset of size k ≤ n that maximizes the hypervolume indicator. It arises in connection with multiobjective selection and archiving strategies, as well as Pareto-front approximation postprocessing for visualization and/or interaction with a decision maker. Efficient algorithms to solve the HSSP are available only for the 2-dimensional case, achieving a time complexity of O(n(k + log n)). In contrast, the best upper bound available for d > 2 is O(n^(d/2) log n + n^(n-k)). Since the hypervolume indicator is a monotone submodular function, the HSSP can be approximated to a factor of 1 - 1/e using a greedy strategy. In this article, greedy O(n(k + log n))-time algorithms for the HSSP in 2 and 3 dimensions are proposed, matching the complexity of current exact algorithms for the 2-dimensional case, and considerably improving upon recent complexity results for this approximation problem.
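A naive rendering of the greedy strategy makes the submodular-approximation idea concrete: repeatedly add the point with the largest hypervolume gain. The quadratic-per-step sketch below (2-D, minimization convention) reaches the same greedy subset that the paper's algorithms compute far more efficiently:

```python
import numpy as np

def hv2d(points, ref):
    """Hypervolume of a mutually nondominated 2-D set (minimization) w.r.t. ref."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):          # ascending f1 implies descending f2
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def greedy_hssp(points, k, ref):
    """Greedy HSSP: add the point with the largest hypervolume gain, k times."""
    chosen, rest = [], list(points)
    for _ in range(k):
        gains = [hv2d(chosen + [p], ref) for p in rest]
        chosen.append(rest.pop(int(np.argmax(gains))))
    return chosen

front = [(1.0, 4.0), (2.0, 2.5), (3.0, 1.5), (4.0, 1.0)]
print(greedy_hssp(front, 2, ref=(5.0, 5.0)))
```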
30

Ahmed, Asad, Osman Hasan, Falah Awwad, Nabil Bastaki, and Syed Rafay Hasan. "Formal Asymptotic Analysis of Online Scheduling Algorithms for Plug-In Electric Vehicles’ Charging." Energies 12, no. 1 (December 21, 2018): 19. http://dx.doi.org/10.3390/en12010019.

Abstract:
A large-scale integration of plug-in electric vehicles (PEVs) into the power grid system has necessitated the design of online scheduling algorithms to accommodate the after-effects of this new type of load, i.e., PEVs, on the overall efficiency of the power system. In online settings, the low computational complexity of the corresponding scheduling algorithms is of paramount importance for the reliable, secure, and efficient operation of the grid system. Generally, the computational complexity of an algorithm is computed using asymptotic analysis. Traditionally, the analysis is performed using the paper-pencil proof method, which is error-prone and thus not suitable for analyzing the mission-critical online scheduling algorithms for PEV charging. To overcome these issues, this paper presents a formal asymptotic analysis approach for online scheduling algorithms for PEV charging using higher-order-logic theorem proving, which is a sound computer-based verification approach. For illustration purposes, we present the complexity analysis of two state-of-the-art online algorithms: the Online cooRdinated CHARging Decision (ORCHARD) algorithm and online Expected Load Flattening (ELF) algorithm.
31

Kim, Bongseok, Youngseok Jin, Youngdoo Choi, Jonghun Lee, and Sangdong Kim. "Low-Complexity Super-Resolution Detection for Range-Vital Doppler Estimation FMCW Radar." Journal of Electromagnetic Engineering and Science 21, no. 3 (July 31, 2021): 236–45. http://dx.doi.org/10.26866/jees.2021.3.r.31.

Abstract:
This paper proposes low-complexity super-resolution detection for range-vital Doppler estimation in frequency-modulated continuous wave (FMCW) radar. In vital radar, two-dimensional (2D) detection algorithms such as 2D-FFT (fast Fourier transform) and 2D-MUSIC (multiple signal classification) are required to jointly estimate range and vital Doppler information such as the human heartbeat and respiration. However, due to the high complexity of 2D full-search algorithms, it is difficult to apply them in low-cost vital FMCW systems. In this paper, we propose a method that estimates the range and vital Doppler parameters using 1D-FFT and 1D-MUSIC algorithms, respectively. Among the 1D-FFT outputs for range detection, we extract only the results corresponding to the human target, identified by the phase variation of respiration across chirps; subsequently, the 1D-MUSIC algorithm is employed to obtain accurate vital Doppler results. By reducing the dimension of the estimation algorithm from 2D to 1D, the computational burden is reduced. To verify the performance of the proposed algorithm, we compare Monte Carlo simulations and root-mean-square error results. The simulation and experimental results show that the complexity of the proposed algorithm is significantly lower than that of an algorithm detecting signals over the full search regions.
32

Keyes, D. E., H. Ltaief, and G. Turkiyyah. "Hierarchical algorithms on hierarchical architectures." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378, no. 2166 (January 20, 2020): 20190055. http://dx.doi.org/10.1098/rsta.2019.0055.

Abstract:
A traditional goal of algorithmic optimality, squeezing out flops, has been superseded by evolution in architecture. Flops no longer serve as a reasonable proxy for all aspects of complexity. Instead, algorithms must now squeeze memory, data transfers, and synchronizations, while extra flops on locally cached data represent only small costs in time and energy. Hierarchically low-rank matrices realize a rarely achieved combination of optimal storage complexity and high-computational intensity for a wide class of formally dense linear operators that arise in applications for which exascale computers are being constructed. They may be regarded as algebraic generalizations of the fast multipole method. Methods based on these hierarchical data structures and their simpler cousins, tile low-rank matrices, are well proportioned for early exascale computer architectures, which are provisioned for high processing power relative to memory capacity and memory bandwidth. They are ushering in a renaissance of computational linear algebra. A challenge is that emerging hardware architecture possesses hierarchies of its own that do not generally align with those of the algorithm. We describe modules of a software toolkit, hierarchical computations on manycore architectures, that illustrate these features and are intended as building blocks of applications, such as matrix-free higher-order methods in optimization and large-scale spatial statistics. Some modules of this open-source project have been adopted in the software libraries of major vendors. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
33

Feng, Hui, Xiaoqing Zhao, Zhengquan Li, and Song Xing. "A Novel Iterative Discrete Estimation Algorithm for Low-Complexity Signal Detection in Uplink Massive MIMO Systems." Electronics 8, no. 9 (September 2, 2019): 980. http://dx.doi.org/10.3390/electronics8090980.

Abstract:
In this paper, a novel iterative discrete estimation (IDE) algorithm, called the modified IDE (MIDE), is proposed to reduce the computational complexity of MIMO detection in uplink massive MIMO systems. MIDE is a revision of the alternating direction method of multipliers (ADMM)-based algorithm, in which a self-updating method is designed with the damping factor estimated and updated at each iteration, based on the Euclidean distance between the iterative solutions of the IDE-based algorithm, in order to accelerate convergence. Compared to the existing ADMM-based detection algorithm, the overall computational complexity of the proposed MIDE algorithm is reduced from O(N_t^3) + O(N_r N_t^2) to O(N_t^2) + O(N_r N_t) in terms of the number of complex-valued multiplications, where N_t and N_r are the number of users and the number of receiving antennas at the base station (BS), respectively. Simulation results show that the proposed MIDE algorithm performs better in terms of the bit error rate (BER) than some recently proposed approximation algorithms for MIMO detection in uplink massive MIMO systems.
34

Uddin, Zahoor, Ayaz Ahmad, Muhammad Iqbal, and Zeeshan Kaleem. "Adaptive Step Size Gradient Ascent ICA Algorithm for Wireless MIMO Systems." Mobile Information Systems 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/7038531.

Abstract:
Independent component analysis (ICA) is a blind source separation (BSS) technique used to separate mixed received signals. ICA algorithms are classified into adaptive and batch algorithms. Adaptive algorithms perform well in time-varying scenarios but have high computational complexity, while batch algorithms have better separation performance in quasi-static channels with low computational complexity. Among batch algorithms, the gradient-based ICA algorithms perform well, but step size selection is critical in these algorithms. In this paper, an adaptive step size gradient ascent ICA (ASS-GAICA) algorithm is presented. The proposed algorithm is free from step size selection and has improved convergence and separation performance. Different performance evaluation criteria are used to verify the effectiveness of the proposed algorithm. Its performance is compared with the FastICA and optimum block adaptive ICA (OBAICA) algorithms for quasi-static and time-varying wireless channels. Simulations are performed with quadrature amplitude modulation (QAM) and binary phase shift keying (BPSK) signals. Results show that the proposed algorithm outperforms the FastICA and OBAICA algorithms for a wide range of signal-to-noise ratios (SNRs) and input data block lengths.
35

Nguyen, A. T., and V. Yu Tsviatkou. "Block-segment search of local extrema of images based on analysis of brightnesses of related pixels and areas." «System analysis and applied information science», no. 4 (December 30, 2019): 4–9. http://dx.doi.org/10.21122/2309-4923-2019-4-4-9.

Abstract:
The aim of this work is to develop an algorithm for extracting local extrema of images with low computational complexity and high accuracy. Known block-search algorithms for local extrema have low computational complexity, but only strict maxima and minima are extracted without errors. Morphological search gives accurate results, extracting the extremal regions formed by non-strict extrema, but it has high computational complexity. The paper proposes a block-segment search algorithm for local extrema of images based on an analysis of the brightness of adjacent pixels and regions. The essence of the algorithm is to search for single-pixel local extrema and regions of uniform brightness, comparing the values of their boundary pixels with the values of the corresponding pixels of adjacent regions: a region is a local maximum (minimum) if the values of all its boundary pixels are greater (smaller) than or equal to the values of all adjacent pixels. The developed algorithm, like the morphological search algorithm, detects all single-pixel local extrema as well as extremal regions, which block-search algorithms cannot do. At the same time, the developed algorithm requires much less time and RAM than the morphological search algorithm.
37

Di Fiore, Carmine, Stefano Fanelli, and Paolo Zellini. "Low complexity secant quasi-Newton minimization algorithms for nonconvex functions." Journal of Computational and Applied Mathematics 210, no. 1-2 (December 2007): 167–74. http://dx.doi.org/10.1016/j.cam.2006.10.060.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Yoo, Do-Sik. "A Low Complexity Subspace-Based DOA Estimation Algorithm with Uniform Linear Array Correlation Matrix Subsampling." International Journal of Antennas and Propagation 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/323545.

Full text
Abstract:
We propose a low complexity subspace-based direction-of-arrival (DOA) estimation algorithm employing a direct signal space construction method (DSPCM) by subsampling the autocorrelation matrix of a uniform linear array (ULA). Three major contributions of this paper are as follows. First of all, we introduce the method of autocorrelation matrix subsampling which enables us to employ a low complexity algorithm based on a ULA without computationally complex eigenvalue decomposition or singular-value decomposition. Secondly, we introduce a signal vector separation method to improve the distinguishability among signal vectors, which can greatly improve the performance, particularly in the low signal-to-noise ratio (SNR) regime. Thirdly, we provide a root finding (RF) method in addition to a spectral search (SS) method as the angle finding scheme. Through simulations, we illustrate that the performance of the proposed scheme is reasonably close to that of the computationally much more expensive MUSIC (MUltiple SIgnal Classification)-based algorithms. Finally, we illustrate that the computational complexity of the proposed scheme is reduced, in comparison with those of MUSIC-based schemes, by a factor of O(N²/K), where K is the number of sources and N is the number of antenna elements.
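A rough sketch of the EVD-free idea under simplifying assumptions (the column choice and noise-power estimate below are naive stand-ins for the paper's subsampling and separation steps):

```python
import numpy as np

def direct_signal_space_doa(X, K, grid=np.linspace(-90, 90, 361)):
    """EVD-free DOA sketch: the columns of the noise-corrected array
    autocorrelation matrix already span the signal subspace, so K of
    them can replace the eigenvectors MUSIC would compute via EVD.

    X: (N_antennas, n_snapshots) ULA data, half-wavelength spacing."""
    N, T = X.shape
    R = X @ X.conj().T / T
    sigma2 = np.min(np.real(np.diag(R)))      # crude noise estimate (assumption)
    S = (R - sigma2 * np.eye(N))[:, :K]       # K columns span the signal space
    Q, _ = np.linalg.qr(S)                    # orthonormalize; cheap since K << N
    spectrum = []
    for deg in grid:
        a = np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(deg)))
        resid = a - Q @ (Q.conj().T @ a)      # component outside the signal space
        spectrum.append(1.0 / (np.abs(resid @ resid.conj()) + 1e-12))
    return grid[np.argsort(spectrum)[-K:]]    # K largest peaks (naive pick)
```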
APA, Harvard, Vancouver, ISO, and other styles
39

Shin, JaeWook, Hyun Jae Baek, Bum Yong Park, and Jaegeol Cho. "A Sequential Selection Normalized Subband Adaptive Filter with Variable Step-Size Algorithms." Mathematical Problems in Engineering 2018 (July 17, 2018): 1–10. http://dx.doi.org/10.1155/2018/1941367.

Full text
Abstract:
This paper proposes a sequential selection normalized subband adaptive filter (SS-NSAF) to reduce computational complexity. In addition, a variable step-size algorithm is proposed based on a mean-square deviation analysis of the SS-NSAF. To further improve convergence speed, an improved variable step-size SS-NSAF using a two-stage concept is proposed. Simulation results confirm the low computational complexity and low misalignment errors of the proposed algorithms.
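The complexity saving comes from updating only part of the filter bank per iteration. A minimal sketch, assuming a round-robin ("sequential") subband choice and omitting the paper's variable step-size rule:

```python
import numpy as np

def ss_nsaf_update(w, U, d, t, mu=0.5, eps=1e-8):
    """One SS-NSAF-style update (sketch).

    w: (M,) filter estimate; U: (N, M) current regressor of each of the
    N subbands; d: (N,) desired subband samples; t: iteration index.
    Only one subband, chosen in round-robin order, is updated per
    iteration, which is what cuts the complexity."""
    i = t % U.shape[0]                             # sequential subband selection
    e = d[i] - U[i] @ w                            # error in the chosen subband
    w = w + mu * e * U[i] / (U[i] @ U[i] + eps)    # normalized (NLMS-like) update
    return w
```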
APA, Harvard, Vancouver, ISO, and other styles
40

Zečević, Žarko, and Maja Rolevski. "Neural Network Approach to MPPT Control and Irradiance Estimation." Applied Sciences 10, no. 15 (July 22, 2020): 5051. http://dx.doi.org/10.3390/app10155051.

Full text
Abstract:
Photovoltaic (PV) modules require maximum power point tracking (MPPT) algorithms to ensure that the amount of power extracted is maximized. In this paper, we propose a low-complexity MPPT algorithm that is based on the neural network (NN) model of the photovoltaic module. Namely, the expression for the output current of the NN model is used to derive the analytical, iterative rules for determining the maximal power point (MPP) voltage and irradiance estimation. In this way, the computational complexity is reduced compared to the other NN-based MPPT methods, in which the optimal voltage is predicted directly from the measurements. The proposed algorithm cannot instantaneously determine the optimal voltage, but it contains a tunable parameter for controlling the trade-off between the tracking speed and computational complexity. Numerical results indicate that the relative error between the actual maximum power and the one obtained by the proposed algorithm is less than 0.1%, which is up to ten times smaller than in the available algorithms.
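The iterative rule can be sketched generically: nudge the operating voltage along dP/dV computed from a model of the module current. The stand-in current model and constants below are assumptions; the paper derives the rule analytically from its trained NN model:

```python
import numpy as np

def track_mpp(i_model, v0=30.0, steps=20, mu=0.5, dv=1e-3):
    """Gradient-style MPP tracking sketch.

    i_model(v) returns the module current predicted by some model (the
    paper uses a trained NN; any smooth stand-in works). 'steps' trades
    tracking speed for computation, loosely mirroring the tunable
    parameter mentioned in the abstract."""
    v = v0
    for _ in range(steps):
        p = lambda vv: vv * i_model(vv)               # power at voltage vv
        dpdv = (p(v + dv) - p(v - dv)) / (2 * dv)     # finite-difference dP/dV
        v = v + mu * dpdv                             # ascend the P-V curve
    return v

# toy stand-in for the NN model of the PV module (assumption)
i_stub = lambda v: np.maximum(8.0 * (1 - np.exp((v - 40.0) / 2.0)), 0.0)
v_mpp = track_mpp(i_stub)
```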
APA, Harvard, Vancouver, ISO, and other styles
41

Khan, Imran, Mohammad Zafar, Majid Ashraf, and Sunghwan Kim. "Computationally Efficient Channel Estimation in 5G Massive Multiple-Input Multiple-output Systems." Electronics 7, no. 12 (December 3, 2018): 382. http://dx.doi.org/10.3390/electronics7120382.

Full text
Abstract:
Traditional channel estimation algorithms such as minimum mean square error (MMSE) are widely used in massive multiple-input multiple-output (MIMO) systems but require a matrix inversion and an enormous number of computations, which results in high computational complexity and makes them impractical to implement. To overcome the matrix inversion problem, we propose a computationally efficient hybrid steepest descent Gauss–Seidel (SDGS) joint detection algorithm, which directly estimates the user's transmitted symbol vector and quickly converges to an accurate estimate within a few simple iterations. Moreover, signal detection performance is further improved by utilizing the bit log-likelihood ratio (LLR) for soft channel decoding. Simulation results showed that the proposed algorithm had better channel estimation performance, improving signal detection by 31.68% while reducing complexity by 45.72% compared with existing algorithms.
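The inversion-free idea can be shown with a plain Gauss-Seidel solve of the MMSE filtering equation (plain Gauss-Seidel only; the paper hybridizes it with steepest descent):

```python
import numpy as np

def gs_mmse_detect(H, y, sigma2, n_iter=5):
    """Gauss-Seidel solve of the MMSE equation A x = b, where
    A = H^H H + sigma2*I and b = H^H y, avoiding the explicit matrix
    inversion that makes exact MMSE detection costly."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y
    x = b / np.diag(A)                      # cheap diagonal initialization
    for _ in range(n_iter):
        for i in range(len(x)):             # sweep, using the freshest values
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x
```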
APA, Harvard, Vancouver, ISO, and other styles
42

Troiani, Chiara, Agostino Martinelli, Christian Laugier, and Davide Scaramuzza. "Low computational-complexity algorithms for vision-aided inertial navigation of micro aerial vehicles." Robotics and Autonomous Systems 69 (July 2015): 80–97. http://dx.doi.org/10.1016/j.robot.2014.08.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Abadi, Mohammad Shams Esfand, and Ali-Reza Danaee. "Low computational complexity family of affine projection algorithms over adaptive distributed incremental networks." AEU - International Journal of Electronics and Communications 68, no. 2 (February 2014): 97–110. http://dx.doi.org/10.1016/j.aeue.2013.07.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kurniawan, Rudi, Zahrul Fuadi, and Ramzi Adriman. "Accumulator-free Hough Transform for Sequence Collinear Points." Aceh International Journal of Science and Technology 10, no. 2 (September 9, 2021): 74–83. http://dx.doi.org/10.13170/aijst.10.2.20894.

Full text
Abstract:
Perception, localization, and navigation of the environment are essential for autonomous mobile robots and vehicles. For that reason, 2D laser rangefinder sensors are popular in mobile robot applications for measuring the distance from the robot to surrounding objects. The measurement data generated by the sensor are transmitted to the controller, where they are processed by one or more suitable algorithms in several steps to extract the desired information. The Universal Hough Transform (UHT) is an appropriate and popular algorithm for extracting primitive geometry such as straight lines, which are later used in further data processing steps. However, the UHT has high computational complexity and requires a so-called accumulator array, making it less suitable for real-time applications where high-speed, low-complexity computation is demanded. In this study, an Accumulator-free Hough Transform (AfHT) is proposed to reduce the computational complexity and eliminate the need for the accumulator array. The proposed algorithm is validated using measurement data from a 2D laser scanner and compared to the standard Hough Transform. The extracted values from AfHT show good agreement with those of the UHT but with a significant reduction in computational complexity and memory requirements.
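One way to see the accumulator-free idea: for a sequence of roughly collinear scan points, the normal-form line parameters (rho, theta) can be computed directly from the points rather than voted into an array. The total-least-squares sketch below illustrates that idea and is not the paper's exact AfHT:

```python
import numpy as np

def line_params_no_accumulator(pts):
    """Estimate (rho, theta) of a line through roughly collinear 2D
    points with no accumulator array: a total-least-squares fit via
    the eigendecomposition of the 2x2 covariance matrix."""
    pts = np.asarray(pts, dtype=float)
    c = pts.mean(axis=0)
    cov = np.cov((pts - c).T)
    evals, evecs = np.linalg.eigh(cov)
    normal = evecs[:, 0]                  # eigenvector of the smallest eigenvalue
    theta = np.arctan2(normal[1], normal[0])
    rho = c @ normal                      # signed distance of the line from origin
    if rho < 0:                           # keep rho >= 0, Hough convention
        rho, theta = -rho, theta + np.pi
    return rho, theta
```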
APA, Harvard, Vancouver, ISO, and other styles
45

Schager, Alexander, Gerald Zauner, Günther Mayr, and Peter Burgholzer. "Extension of the Thermographic Signal Reconstruction Technique for an Automated Segmentation and Depth Estimation of Subsurface Defects." Journal of Imaging 6, no. 9 (September 11, 2020): 96. http://dx.doi.org/10.3390/jimaging6090096.

Full text
Abstract:
With the increased use of lightweight materials with low factors of safety, non-destructive testing becomes increasingly important. Thanks to advances in infrared camera technology, pulse thermography is a cost-efficient way to detect subsurface defects non-destructively. However, currently available evaluation algorithms either have high computational cost or perform poorly on any geometry other than the simplest kind. We present an extension of the thermographic signal reconstruction technique that automatically segments defects from sound areas and images them, while also estimating defect depth, all at low computational cost. We verified our algorithm on real-world measurements and compared the results to standard active thermography algorithms of similar computational complexity. We found that our algorithm detects defects more accurately, especially when more complex geometries are examined.
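The core of thermographic signal reconstruction, which the paper extends, is a low-order polynomial fit to each pixel's cooling curve in log-log space; defective pixels deviate from the ideal cooling slope. A minimal sketch (any segmentation rule built on the coefficients is an assumption here):

```python
import numpy as np

def tsr_fit(frames, t, order=5):
    """Thermographic signal reconstruction core (sketch): fit a
    low-order polynomial to ln(T) vs ln(t) for every pixel of a
    pulse-thermography sequence. The fitted coefficients (or their
    log-derivatives) then separate defect from sound areas.

    frames: (n_frames, H, W) surface temperature rise; t: (n_frames,)."""
    n, h, w = frames.shape
    ln_t = np.log(t)
    ln_T = np.log(frames.reshape(n, -1) + 1e-12)   # guard against log(0)
    coeffs = np.polyfit(ln_t, ln_T, order)         # one fit per pixel, vectorized
    return coeffs.reshape(order + 1, h, w)
```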
APA, Harvard, Vancouver, ISO, and other styles
46

Geraldo, Issa Cherif. "An Automated Profile-Likelihood-Based Algorithm for Fast Computation of the Maximum Likelihood Estimate in a Statistical Model for Crash Data." Journal of Applied Mathematics 2022 (October 26, 2022): 1–11. http://dx.doi.org/10.1155/2022/6974166.

Full text
Abstract:
Numerical computation of maximum likelihood estimates (MLE) is one of the most common problems in applied statistics. Even algorithms considered high-performing can fall short on one or more of the following criteria: global convergence (the capacity to converge to the true unknown solution from any starting guess), numerical stability (the ascent property), implementation feasibility (for example, algorithms requiring matrix inversion cannot be implemented when the involved matrices are not invertible), low computation time, low computational complexity, and the capacity to handle high-dimensional problems. In practice, no algorithm is perfect, and for each problem it is necessary to find the best-performing of the existing algorithms or to develop new ones. In this paper, we consider computing the maximum likelihood estimate of the parameter vector of a statistical model of crash frequencies. We split the parameter vector and develop a new estimation algorithm using the profile likelihood principle. We provide an automatic starting guess for which convergence and numerical stability are guaranteed. We study the performance of our new algorithm on simulated data by comparing it to some of the most famous and modern optimization algorithms. The results suggest that our proposed algorithm outperforms these algorithms.
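The profiling idea itself is generic: split the parameter vector, maximize analytically over one block for each fixed value of the other, and optimize the resulting profile likelihood in lower dimension. A sketch on a hypothetical two-parameter Poisson count model (a stand-in, not the paper's crash model):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical model: two count series with rates a*x and a*b*x,
# parameter vector split as theta = (a, b).
rng = np.random.default_rng(0)
x = rng.uniform(1, 5, size=200)
a_true, b_true = 2.0, 0.7
y1 = rng.poisson(a_true * x)
y2 = rng.poisson(a_true * b_true * x)

def profile_b(b):
    # Inner step: for fixed b, the MLE of a is closed-form here,
    # which is exactly what profiling exploits.
    a_hat = (y1.sum() + y2.sum()) / (x.sum() * (1 + b))
    lam1, lam2 = a_hat * x, a_hat * b * x
    # negative profile log-likelihood (log(y!) terms omitted: constant in b)
    return -(np.sum(y1 * np.log(lam1) - lam1) +
             np.sum(y2 * np.log(lam2) - lam2))

res = minimize_scalar(profile_b, bounds=(1e-6, 10.0), method="bounded")
b_hat = res.x
a_hat = (y1.sum() + y2.sum()) / (x.sum() * (1 + b_hat))
```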
APA, Harvard, Vancouver, ISO, and other styles
47

Wo, Tianbin, and Peter Adam Hoeher. "Low-Complexity Gaussian Detection for MIMO Systems." Journal of Electrical and Computer Engineering 2010 (2010): 1–12. http://dx.doi.org/10.1155/2010/609509.

Full text
Abstract:
For single-carrier transmission over delay-spread multiple-input multiple-output (MIMO) channels, the computational complexity of the receiver is often considered a bottleneck with respect to (w.r.t.) practical implementation. Multi-antenna interference (MAI) together with intersymbol interference (ISI) poses fundamental challenges for efficient and reliable data detection. In this paper, we carry out a systematic study of the interference structure of MIMO-ISI channels and sequentially deduce three different Gaussian approximations to simplify the calculation of the global likelihood function. Using factor graphs as a general framework and applying the Gaussian approximation, three low-complexity iterative detection algorithms are derived, and their performances are compared by means of Monte Carlo simulations. After a careful inspection of their merits and demerits, we propose a graph-based iterative Gaussian detector (GIGD) for severely delay-spread MIMO channels. The GIGD is characterized by a strictly linear computational complexity w.r.t. the effective channel memory length, the number of transmit antennas, and the number of receive antennas. When the channel has a sparse ISI structure, the complexity of the GIGD is strictly proportional to the number of nonzero channel taps. Finally, the GIGD provides near-optimum bit error rate (BER) performance for repetition-encoded MIMO systems.
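The essence of the Gaussian approximation can be shown in the flat-channel, BPSK case: when detecting one symbol, the aggregate interference from all others is modeled as Gaussian with moments taken from their current soft estimates. The sketch below uses a flat update schedule rather than true factor-graph message passing, and assumes a real-valued channel:

```python
import numpy as np

def gaussian_sic_detect(H, y, sigma2, n_iter=5):
    """Gaussian-approximation soft interference cancellation (sketch),
    for real-valued H and BPSK symbols in {-1, +1}."""
    Nr, Nt = H.shape
    m = np.zeros(Nt)              # soft symbol means
    v = np.ones(Nt)               # soft symbol variances
    for _ in range(n_iter):
        for i in range(Nt):
            others = np.delete(np.arange(Nt), i)
            # interference modeled as Gaussian with matched mean/variance
            mean_i = y - H[:, others] @ m[others]
            var_i = sigma2 + (H[:, others] ** 2) @ v[others]
            # scalar LLR for x_i, combining receive antennas
            llr = 2.0 * np.sum(H[:, i] * mean_i / var_i)
            m[i] = np.tanh(llr / 2.0)         # posterior mean of x_i
            v[i] = 1.0 - m[i] ** 2            # posterior variance
    return np.sign(m)
```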
APA, Harvard, Vancouver, ISO, and other styles
48

Sun, Qiang, Xin Wang, Jue Wang, and Chen Xu. "Joint Antenna Selection and Precoding Optimization for Small-Cell Network with Minimum Power Consumption." International Journal of Antennas and Propagation 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/4302950.

Full text
Abstract:
We focus on the power consumption problem for a downlink multiuser small-cell network (SCN) under both quality of service (QoS) and power constraints. Based on a practical power consumption model that accounts for both dynamic transmit power and static circuit power, we formulate the power consumption optimization problem, transform it into a convex problem using the semidefinite relaxation (SDR) technique, and obtain the optimal solution with the CVX tool. We further note that the SDR-based solution is infeasible for realistic implementation due to its heavy backhaul burden and computational complexity. To this end, we propose an alternative suboptimal algorithm with low implementation overhead and complexity, based on minimum mean square error (MMSE) precoding. Furthermore, we propose a distributed correlation-based antenna selection (DCAS) algorithm, combined with our optimization algorithms, to reduce the static circuit power consumption of the SCN. Finally, simulation results demonstrate that the proposed suboptimal algorithm is very effective at minimizing power consumption, with significantly reduced backhaul burden and computational complexity. Moreover, our optimization algorithms with DCAS consume less power than the other benchmark algorithms.
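The low-overhead alternative rests on a standard building block. A sketch of a regularized MMSE precoder with a total-power normalization (the building block only, not the paper's joint antenna-selection and power-minimization procedure):

```python
import numpy as np

def mmse_precoder(H, sigma2, P):
    """Standard MMSE (regularized zero-forcing) precoder sketch:
    W = H^H (H H^H + K*sigma2/P * I)^{-1}, scaled to the power budget P.

    H: (K_users, N_tx) downlink channel matrix."""
    K = H.shape[0]
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + (K * sigma2 / P) * np.eye(K))
    W *= np.sqrt(P) / np.linalg.norm(W)    # meet the total power constraint
    return W
```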
APA, Harvard, Vancouver, ISO, and other styles
49

Dehghan Firoozabadi, Ali, Pablo Irarrazaval, Pablo Adasme, David Zabala-Blanco, Pablo Palacios Játiva, and Cesar Azurdia-Meza. "3D Multiple Sound Source Localization by Proposed T-Shaped Circular Distributed Microphone Arrays in Combination with GEVD and Adaptive GCC-PHAT/ML Algorithms." Sensors 22, no. 3 (January 28, 2022): 1011. http://dx.doi.org/10.3390/s22031011.

Full text
Abstract:
Multiple simultaneous sound source localization (SSL) is one of the most important applications in speech signal processing. One-step algorithms offer low computational complexity (but low accuracy), while two-step methods achieve high accuracy (at high computational complexity). In this article, a combination of a one-step method based on generalized eigenvalue decomposition (GEVD) and a two-step method based on adaptive generalized cross-correlation (GCC) with phase transform/maximum likelihood (PHAT/ML) filters, together with a novel T-shaped circular distributed microphone array (TCDMA), is proposed for 3D multiple simultaneous SSL. The low computational complexity of the GCC algorithm is thus combined with the high accuracy of the GEVD method, while the distributed microphone array eliminates spatial aliasing and provides more informative measurements. The proposed TCDMA-based adaptive GEVD and GCC-PHAT/ML algorithm (TCDMA-AGGPM) is compared with hierarchical grid refinement (HiGRID), temporal extension of multiple response model of sparse Bayesian learning with spherical harmonic (SH) extension (SH-TMSBL), sound field morphological component analysis (SF-MCA), and time-frequency mixture weight Bayesian nonparametric acoustical holography beamforming (TF-MW-BNP-AHB) methods in terms of the mean absolute estimation error (MAEE) in noisy and reverberant environments, on simulated and real data. The results demonstrate the superiority of the proposed method through its high accuracy and low computational complexity for 3D multiple simultaneous SSL.
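The GCC-PHAT building block referenced here is compact enough to sketch: the cross-spectrum of two microphone signals is whitened by its magnitude (the PHAT filter), transformed back, and peak-picked to get the time difference of arrival:

```python
import numpy as np

def gcc_phat_tdoa(x1, x2, fs):
    """GCC-PHAT time-difference-of-arrival estimate between two
    microphone signals, the standard building block the paper
    combines with GEVD and an ML filter."""
    n = len(x1) + len(x2)                        # pad to avoid circular wrap
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    r = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)   # PHAT weighting
    r = np.concatenate((r[-(n // 2):], r[: n // 2 + 1]))   # center zero lag
    lag = np.argmax(np.abs(r)) - n // 2
    return lag / fs                              # TDOA in seconds
```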
APA, Harvard, Vancouver, ISO, and other styles
50

Abdi, Fatemeh, Parviz Amiri, Mohammad Hosien Refan, Manfred Reddig, and Ralph Kennel. "Novel Adaptive Controller for Buck Converter with High Resource Efficiency and Low Computational Complexity." Journal of Circuits, Systems and Computers 29, no. 14 (July 2, 2020): 2050230. http://dx.doi.org/10.1142/s0218126620502308.

Full text
Abstract:
Power converters are used in a wide range of industrial processes. Computational complexity, tracking ability, and calculation accuracy are the main parameters affecting the switching performance of power converters. One of the major parts of a switch-mode power converter is the controller, which is essential for proper operation. A new adaptive controller is proposed to reduce computational complexity; the algorithm is based on an Improved Variable Forgetting Factor (IVFF), Leading Dichotomous Coordinate Descent (DCD), and Exponentially-Weighted Recursive Least Squares (ERLS). The proposed method estimates the system coefficients with 98% accuracy. The settling time of the output voltage is 0.008 ms, faster than that of other algorithms. Thanks to the leading DCD, the structure needs no multiplier or divider blocks. The VFF improves the tracking ability and convergence rate under system variations. The structure can be implemented in any application that needs an optimal controller. Vedic mathematics is used as the multiplier operation in the improved VFF to reduce calculation delay and area. The error of the proposed method converges to zero in fewer than 60 iterations; in other words, the algorithm computes the optimal coefficients in fewer than 50 iterations and is faster than competing algorithms.
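For reference, a sketch of one RLS step with a variable forgetting factor driven by the instantaneous error; the adaptation rule below is a common VFF heuristic, not the paper's IVFF, and the multiplier-free DCD realization is omitted:

```python
import numpy as np

def vff_rls_step(w, P, u, d, lam_min=0.95, lam_max=0.9999, alpha=10.0):
    """One RLS step with a variable forgetting factor (sketch).

    w: (M,) coefficient estimate; P: (M, M) inverse-correlation matrix;
    u: (M,) regressor; d: desired sample. The forgetting factor shrinks
    when the a priori error grows, speeding up tracking."""
    e = d - w @ u                                    # a priori error
    lam = lam_min + (lam_max - lam_min) * np.exp(-alpha * e * e)
    k = P @ u / (lam + u @ P @ u)                    # gain vector
    w = w + k * e                                    # coefficient update
    P = (P - np.outer(k, u @ P)) / lam               # Riccati update of P
    return w, P, lam
```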
APA, Harvard, Vancouver, ISO, and other styles