Journal articles on the topic 'Novel recovery algorithms'




Consult the top 50 journal articles for your research on the topic 'Novel recovery algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Liang, Rui Hua, Xin Peng Du, Qing Bo Zhao, and Li Zhi Cheng. "Sparse Signal Recovery Based on Simulated Annealing." Applied Mechanics and Materials 321-324 (June 2013): 1295–98. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.1295.

Abstract:
Sparse signal recovery is a hot topic in the fields of optimization theory and signal processing. Two main algorithmic approaches, greedy pursuit algorithms and convex relaxation algorithms, have been extensively used to solve this problem. However, these algorithms cannot guarantee to find the globally optimal solution, and they perform poorly when the sparsity level is relatively large. Combining the simulated annealing algorithm with greedy pursuit algorithms, we propose a novel algorithm for solving the sparse recovery problem. Numerical simulations show that the proposed algorithm has very good recovery performance.
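The combination the abstract describes, a stochastic search over candidate supports with a least-squares fit on each, can be sketched as follows. This is a minimal illustration of the idea, not the authors' exact method; the cooling schedule and the swap-one-atom neighbourhood move are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ls_residual(A, y, support):
    """Least-squares fit of y on the chosen columns; returns the residual norm."""
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return np.linalg.norm(y - A[:, support] @ coef)

def sa_sparse_recovery(A, y, k, iters=2000, t0=1.0, cooling=0.995):
    """Search over size-k supports with simulated annealing."""
    n = A.shape[1]
    support = list(rng.choice(n, size=k, replace=False))
    cost = ls_residual(A, y, support)
    best, best_cost = support[:], cost
    t = t0
    for _ in range(iters):
        # Propose a neighbour: swap one atom in the support for one outside it.
        cand = support[:]
        new_atom = int(rng.integers(n))
        if new_atom in cand:
            continue
        cand[int(rng.integers(k))] = new_atom
        cand_cost = ls_residual(A, y, cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if cand_cost < cost or rng.random() < np.exp((cost - cand_cost) / max(t, 1e-12)):
            support, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = support[:], cost
        t *= cooling
    coef, *_ = np.linalg.lstsq(A[:, best], y, rcond=None)
    x = np.zeros(n)
    x[best] = coef
    return x
```

The annealing step is what lets the search escape the local minima that trap purely greedy pursuit.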
2

Zhu, Ying, Yong Xing Jia, Chuan Zhen Rong, and Yu Yang. "Study on Compressed Sensing Recovery Algorithms." Applied Mechanics and Materials 433-435 (October 2013): 322–25. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.322.

Abstract:
Compressive sensing is a novel signal sampling theory for signals that are sparse or compressible. In this case, the signal can be reconstructed from a small number of measured values. This paper reviews the ideas behind OMP, GBP, and SP, presents the algorithms, analyzes the experimental results, and suggests some improvements.
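As a concrete reference point, orthogonal matching pursuit (OMP), one of the recovery algorithms reviewed, greedily selects the dictionary atom most correlated with the current residual and then re-fits by least squares on the chosen support. A generic textbook sketch, not the paper's implementation:

```python
import numpy as np

def omp(A, y, k, tol=1e-12):
    """Orthogonal Matching Pursuit for y ~= A @ x with k-sparse x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Atom selection: column with the largest absolute correlation
        # with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal projection: least-squares re-fit on the support so far.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x
```

The re-fit step is what distinguishes OMP from plain matching pursuit: the residual stays orthogonal to all previously selected atoms.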
3

Battiston, Adrian, Inna Sharf, and Meyer Nahon. "Attitude estimation for collision recovery of a quadcopter unmanned aerial vehicle." International Journal of Robotics Research 38, no. 10-11 (August 8, 2019): 1286–306. http://dx.doi.org/10.1177/0278364919867397.

Abstract:
An extensive evaluation of attitude estimation algorithms in simulation and experiments is performed to determine their suitability for a collision recovery pipeline of a quadcopter unmanned aerial vehicle. A multiplicative extended Kalman filter (MEKF), unscented Kalman filter (UKF), complementary filter, [Formula: see text] filter, and novel adaptive varieties of the selected filters are compared. The experimental quadcopter uses a PixHawk flight controller, and the algorithms are implemented using data from only the PixHawk inertial measurement unit (IMU). Performance of the aforementioned filters is first evaluated in a simulation environment using modified sensor models to capture the effects of collision on inertial measurements. Simulation results help define the efficacy and use cases of the conventional and novel algorithms in a quadcopter collision scenario. An analogous evaluation is then conducted by post-processing logged sensor data from collision flight tests, to gain new insights into algorithms’ performance in the transition from simulated to real data. The post-processing evaluation compares each algorithm’s attitude estimate, including the stock attitude estimator of the PixHawk controller, to data collected by an offboard infrared motion capture system. Based on this evaluation, two promising algorithms, the MEKF and an adaptive [Formula: see text] filter, are selected for implementation on the physical quadcopter in the control loop of the collision recovery pipeline. Experimental results show an improvement in the metric used to evaluate experimental performance, the time taken to recover from the collision, when compared with the stock attitude estimator on the PixHawk (PX4) software.
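Of the filters compared, the complementary filter is the simplest: it high-passes the integrated gyro rate (smooth but drifting) and low-passes the accelerometer tilt estimate (noisy but drift-free). A one-axis sketch; the blend factor alpha is an assumed typical value, not taken from the paper:

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro rate integration with accelerometer tilt for one angle.

    gyro_rates   : angular rates in rad/s, one per sample
    accel_angles : tilt angles (rad) derived from the accelerometer
    """
    angle = accel_angles[0]          # initialize from the drift-free sensor
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        # High-pass the integrated gyro, low-pass the accelerometer angle.
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return out
```

A constant gyro bias then produces a bounded steady-state offset instead of unbounded drift, which is the property the more elaborate Kalman-family filters refine.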
4

Wang, Runsong, Xuelian Li, Juntao Gao, Hui Li, and Baocang Wang. "Quantum rotational cryptanalysis for preimage recovery of round-reduced Keccak." Quantum Information & Computation 23, no. 3&4 (February 2023): 223–34. http://dx.doi.org/10.26421/qic23.3-4-3.

Abstract:
The Exclusive-OR Sum-of-Product (ESOP) minimization problem has long been of interest to the research community because of its importance in classical logic design (including low-power design and design for test), reversible logic synthesis, and knowledge discovery, among other applications. However, no exact minimal minimization method has been presented for more than seven variables on arbitrary functions. This paper presents a novel quantum-classical hybrid algorithm for the exact minimal ESOP minimization of incompletely specified Boolean functions. This algorithm constructs oracles from sets of constraints and leverages the quantum speedup offered by Grover's algorithm to find solutions to these oracles, thereby improving over classical algorithms. Improved encoding of ESOP expressions results in substantially fewer decision variables compared to many existing algorithms for many classes of Boolean functions. This paper also extends the idea of exact minimal ESOP minimization to additionally minimize the cost of realizing an ESOP expression as a quantum circuit. To the extent of the authors' knowledge, such a method has never been published. This algorithm was tested on completely and incompletely specified Boolean functions via quantum simulation.
5

Acharya, Deep Shekhar, and Sudhansu Kumar Mishra. "Optimal Consensus Recovery of Multi-agent System Subjected to Agent Failure." International Journal on Artificial Intelligence Tools 29, no. 06 (September 2020): 2050017. http://dx.doi.org/10.1142/s0218213020500177.

Abstract:
Multi-Agent Systems are susceptible to external disturbances, sensor failures, or collapse of the communication channel/media. Such failures disconnect the agent network and thereby hamper the consensus of the system. Quick recovery of consensus is vital to continue the normal operation of an agent-based system. However, only limited works in the past have investigated the problem of recovering the consensus of an agent-based system in the event of a failure. This work proposes a novel algorithmic approach to recover the lost consensus when an agent-based system is subject to the failure of an agent. The main focus of the algorithm is to reconnect the multi-agent network in a way that increases the connectivity of the network post recovery. The proposed algorithm may be applied to both linear and non-linear continuous-time consensus protocols. To verify the efficiency of the proposed algorithm, it has been applied and tested on two multi-agent networks. The results thus obtained have been compared with other state-of-the-art recovery algorithms. Finally, it has been established that the proposed algorithm achieves better connectivity and, therefore, faster consensus than the other state-of-the-art algorithms.
6

Shukla, Vasundhara, and Preety D. Swami. "Sparse Signal Recovery through Long Short-Term Memory Networks for Compressive Sensing-Based Speech Enhancement." Electronics 12, no. 14 (July 17, 2023): 3097. http://dx.doi.org/10.3390/electronics12143097.

Abstract:
This paper presents a novel speech enhancement approach based on compressive sensing (CS) which uses long short-term memory (LSTM) networks for the simultaneous recovery and enhancement of the compressed speech signals. The advantage of this algorithm is that it does not require an iterative process to recover the compressed signals, which makes the recovery process fast and straightforward. Furthermore, the proposed approach does not require prior knowledge of signal and noise statistical properties for sensing matrix optimization because the LSTM can directly extract and learn the required information from the training data. The proposed technique is evaluated against white, babble, and f-16 noises. To validate the effectiveness of the proposed approach, perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI), and signal-to-distortion ratio (SDR) were compared to other variants of OMP-based CS algorithms. The experimental outcomes show that the proposed approach achieves maximum improvements of 50.06%, 43.65%, and 374.16% for PESQ, STOI, and SDR, respectively, over the different variants of OMP-based CS algorithms.
7

Zhang, Hongyang, Zhouchen Lin, Chao Zhang, and Junbin Gao. "Relations Among Some Low-Rank Subspace Recovery Models." Neural Computation 27, no. 9 (September 2015): 1915–50. http://dx.doi.org/10.1162/neco_a_00762.

Abstract:
Recovering intrinsic low-dimensional subspaces from data distributed on them is a key preprocessing step in many applications. In recent years, much work has modeled subspace recovery as low-rank minimization problems. We find that some representative models, such as robust principal component analysis (R-PCA), robust low-rank representation (R-LRR), and robust latent low-rank representation (R-LatLRR), are actually deeply connected. More specifically, we discover that once a solution to one of the models is obtained, we can obtain the solutions to the other models in closed form. Since R-PCA is the simplest, our discovery makes it the center of low-rank subspace recovery models. Our work has two important implications. First, R-PCA has a solid theoretical foundation. Under certain conditions, we can find globally optimal solutions to these low-rank models with overwhelming probability, although the models are nonconvex. Second, we can obtain significantly faster algorithms for these models by solving R-PCA first. The computation cost can be further cut by applying low-complexity randomized algorithms, for example, our novel [Formula: see text] filtering algorithm, to R-PCA. Although for the moment the formal proof of our [Formula: see text] filtering algorithm is not yet available, experiments verify the advantages of our algorithm over other state-of-the-art methods based on the alternating direction method.
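The R-PCA decomposition at the center of these models, M = L + S with L low-rank and S sparse, is commonly attacked with proximal operators: singular-value thresholding for the nuclear norm and entrywise soft thresholding for the l1 norm. The crude alternating sketch below illustrates only those two building blocks; the thresholds and annealing factor are illustrative assumptions, not the alternating-direction solver the paper discusses.

```python
import numpy as np

def soft(x, t):
    """Entrywise soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(X, t):
    """Singular value thresholding, the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft(s, t)) @ Vt

def rpca(M, lam=None, tau=None, iters=100):
    """Crude alternating low-rank + sparse split, M ~= L + S."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))      # standard R-PCA weighting
    if tau is None:
        tau = 0.5 * np.linalg.norm(M, 2)       # start near the spectral scale
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, tau)                    # shrink singular values
        S = soft(M - L, lam * tau)             # shrink entries
        tau *= 0.95                            # anneal the threshold
    return L, S
```

Because the final soft-threshold level is small, the additive split M = L + S is recovered to within that threshold per entry.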
8

Liu, Cheng, Tong Wang, Kun Liu, and Xinying Zhang. "A Novel Sparse Bayesian Space-Time Adaptive Processing Algorithm to Mitigate Off-Grid Effects." Remote Sensing 14, no. 16 (August 11, 2022): 3906. http://dx.doi.org/10.3390/rs14163906.

Abstract:
Space-time adaptive processing (STAP) algorithms based on sparse recovery (SR) have been researched because of their low requirement for training snapshots. However, once some portion of the clutter is not located on the grids, i.e., in off-grid cases, the performance of most SR-STAP algorithms degrades significantly. Reducing the grid interval can mitigate off-grid effects, but brings strong column coherence of the dictionary, a heavy computational load, and a heavy storage load. A sparse Bayesian learning approach is proposed in this paper to mitigate the off-grid effects. The algorithm employs an efficient sequential addition and deletion of dictionary atoms to estimate the clutter subspace, which means that strong column coherence has no effect on the performance of the proposed algorithm. In addition, the proposed algorithm requires neither a heavy computational load nor a heavy storage load. Off-grid effects can be mitigated with the proposed algorithm when the grid interval is sufficiently small. The excellent performance of the novel algorithm is demonstrated on simulated data.
9

Song, Chen, Jiarui Deng, Zehao Liu, Bingnan Wang, Yirong Wu, and Hui Bi. "Complex-Valued Sparse SAR-Image-Based Target Detection and Classification." Remote Sensing 14, no. 17 (September 2, 2022): 4366. http://dx.doi.org/10.3390/rs14174366.

Abstract:
It is known that synthetic aperture radar (SAR) images obtained by typical matched filtering (MF)-based algorithms always suffer from serious noise, sidelobes, and clutter. However, improving image quality increases the complexity of SAR systems, which affects the applications of SAR images. The introduction of sparse signal processing technologies into SAR imaging offers a new way to solve this problem. Sparse SAR images obtained by sparse recovery algorithms show better image performance than typical complex SAR images, with lower sidelobes and higher signal-to-noise ratios (SNR). As the most widely applied fields of SAR images, target detection and target classification rely on SAR images of high quality. Therefore, in this paper, a target detection framework based on sparse images recovered by the complex approximate message passing (CAMP) algorithm and a novel classification network using sparse images reconstructed by the new iterative soft thresholding (BiIST) algorithm are proposed. Experimental results show that sparse SAR images perform better for both target classification and target detection than images recovered by MF-based algorithms, which validates the great application potential of sparse images.
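Iterative soft thresholding, the family that BiIST builds on, alternates a gradient step on the data-fidelity term with entrywise shrinkage. A generic ISTA sketch for the l1-regularized problem, not the BiIST variant itself:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                  # gradient of the quadratic term
        z = x - g / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x
```

The shrinkage step is what suppresses sidelobes and noise relative to a matched-filter image: small coefficients are driven exactly to zero.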
10

Malik, Jameel, Ahmed Elhayek, and Didier Stricker. "WHSP-Net: A Weakly-Supervised Approach for 3D Hand Shape and Pose Recovery from a Single Depth Image." Sensors 19, no. 17 (August 31, 2019): 3784. http://dx.doi.org/10.3390/s19173784.

Abstract:
Hand shape and pose recovery is essential for many computer vision applications such as animation of a personalized hand mesh in a virtual environment. Although there are many hand pose estimation methods, only a few deep learning based algorithms target 3D hand shape and pose from a single RGB or depth image. Jointly estimating hand shape and pose is very challenging because none of the existing real benchmarks provides ground truth hand shape. For this reason, we propose a novel weakly-supervised approach for 3D hand shape and pose recovery (named WHSP-Net) from a single depth image by learning shapes from unlabeled real data and labeled synthetic data. To this end, we propose a novel framework which consists of three novel components. The first is the Convolutional Neural Network (CNN) based deep network which produces 3D joints positions from learned 3D bone vectors using a new layer. The second is a novel shape decoder that recovers dense 3D hand mesh from sparse joints. The third is a novel depth synthesizer which reconstructs 2D depth image from 3D hand mesh. The whole pipeline is fine-tuned in an end-to-end manner. We demonstrate that our approach recovers reasonable hand shapes from real world datasets as well as from live stream of depth camera in real-time. Our algorithm outperforms state-of-the-art methods that output more than the joint positions and shows competitive performance on 3D pose estimation task.
11

Meng, Dandan, Xianpeng Wang, Mengxing Huang, Chong Shen, and Guoan Bi. "Weighted Block Sparse Recovery Algorithm for High Resolution DOA Estimation with Unknown Mutual Coupling." Electronics 7, no. 10 (September 25, 2018): 217. http://dx.doi.org/10.3390/electronics7100217.

Abstract:
Based on weighted block sparse recovery, a high resolution direction-of-arrival (DOA) estimation algorithm is proposed for data with unknown mutual coupling. In our proposed method, a new block representation model based on the array covariance vectors is first formulated to avoid the influence of unknown mutual coupling by utilizing the inherent structure of the steering vector. Then a weighted l1-norm penalty algorithm is proposed to recover the block sparse matrix, in which the weighting matrix is constructed based on the principle of a novel Capon spatial spectrum function to increase the sparsity of the solution. Finally, the DOAs can be obtained from the positions of the non-zero blocks of the recovered sparse matrix. Due to the use of the whole received data of the array and the enhanced sparsity of the solution, the proposed method effectively avoids the loss of array aperture and achieves better estimation performance under unknown mutual coupling in terms of both spatial resolution and accuracy. Simulation experiments show that the proposed method achieves better performance than other existing algorithms in minimizing the effects of unknown mutual coupling.
12

KANKAM, Kunrada, Prasit CHOLAMJİAK, and Watcharaporn CHOLAMJİAK. "A modified parallel monotone hybrid algorithm for a finite family of $\mathcal{G}$-nonexpansive mappings apply to a novel signal recovery." Results in Nonlinear Analysis 5, no. 3 (September 30, 2022): 393–411. http://dx.doi.org/10.53006/rna.1122092.

Abstract:
In this work, we investigate the strong convergence of the sequences generated by the shrinking projection method and the parallel monotone hybrid method for finding a common fixed point of a finite family of $\mathcal{G}$-nonexpansive mappings under suitable conditions in Hilbert spaces endowed with graphs. We also give some numerical examples and provide an application to signal recovery in situations where the type of noise is unknown. Moreover, numerical experiments with different types of blurring matrices and noise demonstrate the efficiency and implementation of our algorithms for the LASSO problem in signal recovery.
13

Li, Yangyang, Jianping Zhang, Guiling Sun, and Dongxue Lu. "The Sparsity Adaptive Reconstruction Algorithm Based on Simulated Annealing for Compressed Sensing." Journal of Electrical and Computer Engineering 2019 (July 14, 2019): 1–8. http://dx.doi.org/10.1155/2019/6950819.

Abstract:
This paper proposes a novel sparsity adaptive simulated annealing algorithm to solve the sparse recovery problem. The algorithm combines the advantages of the sparsity adaptive matching pursuit (SAMP) algorithm and the global search of the simulated annealing method for the recovery of the sparse signal. First, we calculate the sparsity and the initial support collection as the initial search points of the proposed optimization algorithm by using the idea of SAMP. Then, we design a two-cycle reconstruction method to find the support sets efficiently and accurately by updating the optimization direction. Finally, we take advantage of the sparsity adaptive simulated annealing algorithm in global optimization to guide the sparse reconstruction. The proposed sparsity adaptive greedy pursuit model has a simple geometric structure, can obtain the globally optimal solution, and outperforms the greedy algorithm in terms of recovery quality. Our experimental results validate that the proposed algorithm outperforms existing state-of-the-art sparse reconstruction algorithms.
14

Zhao, Hongwei, Zichun Zhang, Xiaozhu Shi, and Yihui Yin. "A novel demodulation algorithm for VHF Data Broadcast signals in multi-sources augmentation navigation system." International Journal of Distributed Sensor Networks 16, no. 3 (March 2020): 155014772091477. http://dx.doi.org/10.1177/1550147720914770.

Abstract:
The augmentation navigation system based on multi-source information fusion can significantly improve positioning accuracy, and the multi-source information is usually transmitted through VHF Data Broadcast. Aiming at the burst characteristics of VHF Data Broadcast, this article proposes a novel demodulation algorithm based on an open-loop structure. When a VHF Data Broadcast burst is detected, timing recovery is performed first, and the cross-correlation between the timing-recovered signal and the local training symbol is calculated to complete the frame synchronization. Then, data-aided and non-data-aided algorithms are used to estimate the frequency offset. Finally, the phase offset is estimated and carrier synchronization is accomplished. The simulation results demonstrate that the proposed algorithm can quickly accomplish carrier synchronization without using a feedback-loop structure, and the bit error rate is less than 10^-4 when the signal-to-noise ratio is greater than 17 dB, which satisfies the requirement for receiving VHF Data Broadcast signals in an augmentation navigation system. Therefore, the proposed algorithm can be used for receiving VHF Data Broadcast signals.
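The frame-synchronization step described above, sliding a known training symbol over the timing-recovered samples and picking the cross-correlation peak, can be sketched as follows. The training sequence, normalization, and detection threshold here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def frame_sync(rx, training, threshold=0.8):
    """Locate a burst by peak normalized cross-correlation with a training symbol.

    Returns (start_index, peak_correlation); start_index is -1 when no
    position exceeds the detection threshold.
    """
    n = len(training)
    t_norm = np.linalg.norm(training)
    best_idx, best_corr = -1, 0.0
    for i in range(len(rx) - n + 1):
        seg = rx[i:i + n]
        denom = np.linalg.norm(seg) * t_norm
        # Normalized correlation magnitude; 1.0 means a perfect match.
        corr = abs(np.vdot(training, seg)) / denom if denom > 0 else 0.0
        if corr > best_corr:
            best_idx, best_corr = i, corr
    return (best_idx, best_corr) if best_corr >= threshold else (-1, best_corr)
```

Normalizing by the segment energy keeps the detector insensitive to the received amplitude, which matters for bursts with unknown gain.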
15

Li, Yinhai, Fei Wang, and Xinhua Hu. "Deep-Learning-Based 3D Reconstruction: A Review and Applications." Applied Bionics and Biomechanics 2022 (September 15, 2022): 1–6. http://dx.doi.org/10.1155/2022/3458717.

Abstract:
In recent years, deep learning models have been widely used in 3D reconstruction and have made remarkable progress. How to effectively manage the explosive growth of 3D models has become a research hotspot. This work surveys the mainstream deep-learning-based 3D model retrieval algorithms developed to date and further weighs their advantages and disadvantages according to experimental evaluations of the algorithms. The main 3D model retrieval algorithms can be divided into two categories: (1) model-based 3D shape retrieval methods, in which both the query and the retrieved objects are 3D models; these can be further divided into voxel-based, point-cloud-based, and appearance-based methods; and (2) cross-domain 3D model retrieval methods based on 2D images, in which the query is a 2D image and the retrieved result is a 3D model, including retrieval methods based on 2D views, realistic 2D sketches, and 3D model reconstruction. Finally, the work analyzes novel deep-learning-based 3D model retrieval algorithms and discusses promising directions for future development.
16

Liu, Zhuoliu, Luwei Fu, Maojun Pan, and Zhiwei Zhao. "Lightweight Path Recovery in IPv6 Internet-of-Things Systems." Electronics 11, no. 8 (April 12, 2022): 1220. http://dx.doi.org/10.3390/electronics11081220.

Abstract:
In an Internet-of-Things system supported by Internet Protocol version 6 (IPv6), the Routing Protocol for Low-Power and Lossy Networks (RPL) presents extensive applications in various network scenarios. In these novel scenarios characterized by the access of massive devices, path recovery, which reconstructs the complete path of the packet transmission, plays a vital role in network measurement, topology inference, and information security. This paper proposes a Lightweight Path recovery algorithm (LiPa) for multi-hop point-to-point communication. The core idea of LiPa is to make full use of the spatial and temporal information of the network topology to recover the unknown paths iteratively. Specifically, spatial and temporal information refer to the potential correlations between different paths within a time slot and path status during different time slots, respectively. To verify the effect of our proposal, we separately analyze the performance of leveraging temporal information, spatial information, and their composition by extensive simulations. We also compare LiPa with two state-of-the-art methods in terms of the recovery accuracy and the gain–loss ratio. The experiment results show that LiPa significantly outperforms all its counterpart algorithms in different network settings. Thus, LiPa can be considered as a promising approach for packet-level path recovery with minor loss and great adaptability.
17

KHONSARI, A., H. SARBAZI-AZAD, and M. OULD-KHAOUA. "A Performance Model of Software-Based Deadlock Recovery Routing Algorithm in Hypercubes." Parallel Processing Letters 15, no. 01n02 (March 2005): 153–68. http://dx.doi.org/10.1142/s012962640500212x.

Abstract:
Recent studies have revealed that deadlocks are generally infrequent in the network. Thus the hardware resources, e.g. virtual channels, dedicated for deadlock avoidance are not utilised most of the time. This consideration has motivated the development of novel adaptive routing algorithms with deadlock recovery. This paper describes a new analytical model to predict message latency in hypercubes with a true fully adaptive routing algorithm with progressive deadlock recovery. One of the main features of the proposed model is the use of results from queueing systems with impatient customers to capture the effects of the timeout mechanism used in this routing algorithm for deadlock detection. The validity of the model is demonstrated by comparing analytical results with those obtained through simulation experiments.
18

Zhang, Rui, Di Xiao, and Yanting Chang. "A Novel Image Authentication with Tamper Localization and Self-Recovery in Encrypted Domain Based on Compressive Sensing." Security and Communication Networks 2018 (2018): 1–15. http://dx.doi.org/10.1155/2018/1591206.

Abstract:
This paper proposes a novel tamper detection, localization, and recovery scheme for encrypted images based on the Discrete Wavelet Transform (DWT) and Compressive Sensing (CS). The original image is first transformed into the DWT domain and divided into an important part, the low-frequency part, and an unimportant part, the high-frequency part. Since the low-frequency part contains the main information of the image, traditional chaotic encryption is employed for it. The high-frequency part is then encrypted with CS to vacate space for the watermark. The scheme takes the processed original image content as the watermark, from which characteristic digest values are generated. Compared with existing image authentication algorithms, the proposed scheme realizes not only tamper detection and localization but also tamper recovery. Moreover, tamper recovery is based on block division, and the recovery accuracy varies with the content that is possibly tampered. If either the watermark or the low-frequency part is tampered with, the recovery accuracy is 100%. The experimental results show that the scheme can not only distinguish the type of tamper and find the tampered blocks but also recover the main information of the original image. With great robustness and security, the scheme can adequately meet the needs of secure image transmission under unreliable conditions.
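The DWT split into a low-frequency approximation and high-frequency details can be illustrated with a single level of the 2D Haar transform. This is a generic sketch of the frequency split only; the paper's wavelet level, encryption, and watermarking steps are not reproduced.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2D Haar transform: approximation (LL) plus details."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row pairs: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row pairs: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-frequency approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Invert the transform, reassembling the image exactly."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img
```

The LL band carries the bulk of the image energy, which is why the scheme protects it with stronger encryption while the detail bands can be compressed to make room for the watermark.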
19

Suantai, Suthep, Kunrada Kankam, Watcharaporn Cholamjiak, and Watcharaporn Yajai. "Parallel Hybrid Algorithms for a Finite Family of G-Nonexpansive Mappings and Its Application in a Novel Signal Recovery." Mathematics 10, no. 12 (June 20, 2022): 2140. http://dx.doi.org/10.3390/math10122140.

Abstract:
This article considers a parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with graphs and suggests iterative schemes for finding a common fixed point via two different hybrid projection methods. Moreover, we show the computational performance of our algorithm in comparison with some existing methods. Strong convergence theorems are proved under suitable conditions. Finally, we give some numerical experiments with our algorithms to show their efficiency and implementation for LASSO problems in signal recovery with different types of blurred matrices and noise.
20

Likassa, Habte Tadesse. "New Robust Principal Component Analysis for Joint Image Alignment and Recovery via Affine Transformations, Frobenius and L2,1 Norms." International Journal of Mathematics and Mathematical Sciences 2020 (April 10, 2020): 1–9. http://dx.doi.org/10.1155/2020/8136384.

Abstract:
This paper proposes an effective and robust method for image alignment and recovery on a set of linearly correlated data via the Frobenius and L2,1 norms. The most popular and successful approach is to model the robust PCA problem as a low-rank matrix recovery problem in the presence of sparse corruption. The existing algorithms still fall short in dealing with the potential impact of outliers and heavy sparse noise on image alignment and recovery. Thus, the new algorithm tackles the potential impact of outliers and heavy sparse noise by using novel ideas of affine transformations together with the Frobenius and L2,1 norms. To attain this, affine transformations and the Frobenius and L2,1 norms are incorporated into the decomposition process. As such, the new algorithm is more resilient to errors, outliers, and occlusions. To solve the convex optimization involved, an alternating iterative process is also considered to alleviate the complexity. Simulations conducted on the recovery of face images and handwritten digits demonstrate the effectiveness of the new approach compared with the main state-of-the-art works.
21

Row, Ter-Chan, Wei-Ming Syu, Yen-Liang Pan, and Ching-Cheng Wang. "One Novel and Optimal Deadlock Recovery Policy for Flexible Manufacturing Systems Using Iterative Control Transitions Strategy." Mathematical Problems in Engineering 2019 (March 27, 2019): 1–12. http://dx.doi.org/10.1155/2019/4847072.

Abstract:
This paper focuses on solving deadlock problems of flexible manufacturing systems (FMS) based on Petri nets theory. Precisely, one novel control transition technology is developed to solve FMS deadlock problem. This new proposed technology can not only identify the maximal saturated tokens of idle places in Petri net model (PNM) but also further reserve all original reachable markings whatever they are legal or illegal ones. In other words, once the saturated number of tokens in idle places is identified, the maximal markings of system reachability graph can then be checked. Two classical S3PR (the Systems of Simple Sequential Processes with Resources) examples are used to illustrate the proposed technology. Experimental results indicate that the proposed algorithm of control transition technology seems to be the best one among all existing algorithms.
22

Teshima, Takeshi, Miao Xu, Issei Sato, and Masashi Sugiyama. "Clipped Matrix Completion: A Remedy for Ceiling Effects." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5151–58. http://dx.doi.org/10.1609/aaai.v33i01.33015151.

Abstract:
We consider the problem of recovering a low-rank matrix from its clipped observations. Clipping arises in many scientific areas and obstructs statistical analyses. On the other hand, matrix completion (MC) methods can recover a low-rank matrix from various information deficits by using the principle of low-rank completion. However, the current theoretical guarantees for low-rank MC do not apply to clipped matrices, as the deficit depends on the underlying values. Therefore, the feasibility of clipped matrix completion (CMC) is not trivial. In this paper, we first provide a theoretical guarantee for the exact recovery of CMC by using a trace-norm minimization algorithm. Furthermore, we propose practical CMC algorithms by extending ordinary MC methods. Our extension is to use the squared hinge loss in place of the squared loss for reducing the penalty of overestimation on clipped entries. We also propose a novel regularization term tailored for CMC. It is a combination of two trace-norm terms, and we theoretically bound the recovery error under the regularization. We demonstrate the effectiveness of the proposed methods through experiments using both synthetic and benchmark data for recommendation systems.
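The key modeling choice, replacing the squared loss with a squared hinge on clipped entries so that predicting above the ceiling is not penalized, can be written directly. Variable names here are illustrative, not from the paper's code.

```python
import numpy as np

def clipped_losses(pred, obs, ceil):
    """Per-entry loss for clipped matrix completion.

    Ordinary entries use squared error. Entries observed at the clipping
    ceiling use a squared hinge: any prediction at or above the ceiling
    incurs zero loss, since the true value may exceed what was recorded.
    """
    clipped = obs >= ceil
    sq = (pred - obs) ** 2
    hinge = np.maximum(ceil - pred, 0.0) ** 2
    return np.where(clipped, hinge, sq)
```

A completion solver can sum these losses over observed entries and add the trace-norm regularizer; the hinge keeps the low-rank fit from being dragged down toward the ceiling on clipped entries.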
APA, Harvard, Vancouver, ISO, and other styles
23

Alassafi, Madini O., Ishtiaq Rasool Khan, Rayed AlGhamdi, Wajid Aziz, Abdulrahman A. Alshdadi, Mohamed M. Dessouky, Adel Bahaddad, Ali Altalbe, and Nabeel Albishry. "Studying Dynamical Characteristics of Oxygen Saturation Variability Signals Using Haar Wavelet." Healthcare 11, no. 16 (August 13, 2023): 2280. http://dx.doi.org/10.3390/healthcare11162280.

Full text
Abstract:
An aim of the analysis of biomedical signals such as heart rate variability signals, brain signals, and oxygen saturation variability (OSV) signals is the design and development of tools to extract information about the underlying complexity of physiological systems, detect physiological states, monitor health conditions over time, or predict pathological conditions. Entropy-based complexity measures are commonly used to quantify the complexity of biomedical signals; however, novel complexity measures need to be explored in the context of biomedical signal classification. In this work, we present a novel technique that uses Haar wavelets to analyze the complexity of OSV signals of subjects during COVID-19 infection and after recovery. The data used to evaluate the performance of the proposed algorithm comprised recordings of OSV signals from 44 COVID-19 patients during illness and after recovery. The performance of the proposed technique was compared with four scale-based entropy measures: multiscale entropy (MSE), multiscale permutation entropy (MPE), multiscale fuzzy entropy (MFE), and multiscale amplitude-aware permutation entropy (MAMPE). Preliminary results of the pilot study revealed that the proposed algorithm outperformed MSE, MPE, MFE, and MAMPE in terms of accuracy and time efficiency for separating the during-illness and after-recovery OSV signals of COVID-19 subjects. Further studies are needed to evaluate the potential of the proposed algorithm for large datasets and in the context of other biomedical signal classifications.
APA, Harvard, Vancouver, ISO, and other styles
24

Guo, Jianzhong, Cong Cao, Dehui Shi, Jing Chen, Shuai Zhang, Xiaohu Huo, Dejin Kong, Jian Li, Yukang Tian, and Min Guo. "Matching Pursuit Algorithm for Decoding of Binary LDPC Codes." Wireless Communications and Mobile Computing 2021 (October 31, 2021): 1–5. http://dx.doi.org/10.1155/2021/9980774.

Full text
Abstract:
This paper presents a novel hard-decision decoding algorithm for low-density parity-check (LDPC) codes, in which standard matching pursuit (MP) is adapted for error-pattern recovery from the syndrome over GF(2). In this algorithm, the inner-product operation is converted into XOR and accumulation, which makes matching pursuit highly efficient. In addition, the maximum number of iterations is theoretically related to sparsity and error probability according to sparse theory. To evaluate the proposed algorithm, two MP-based decoding algorithms are simulated and compared over an AWGN channel, i.e., generic MP (GMP) and syndrome MP (SMP). Simulation results show that the GMP algorithm outperforms the SMP by 0.8 dB at BER = 10^-5.
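The XOR-and-accumulate idea can be sketched as a greedy syndrome decoder. This is a minimal stand-in, not the paper's exact algorithm: the "inner product" between a parity-check column and the residual syndrome reduces to an AND plus a popcount, and the residual update is a column-wise XOR.

```python
import numpy as np

def mp_syndrome_decode(H, syndrome, max_iter=10):
    """Greedy matching-pursuit-style recovery of a sparse error pattern
    over GF(2) (illustrative sketch). H is a binary parity-check matrix;
    the returned vector e satisfies H e = syndrome (mod 2) on success."""
    m, n = H.shape
    e = np.zeros(n, dtype=np.uint8)
    r = syndrome.copy()
    for _ in range(max_iter):
        if not r.any():          # syndrome fully explained
            break
        # correlation = popcount(H[:, j] AND r) for every column j
        corr = (H & r[:, None]).sum(axis=0)
        j = int(np.argmax(corr))
        e[j] ^= 1                # flip the most correlated error position
        r ^= H[:, j]             # XOR update of the residual syndrome
    return e
```

With a small parity-check matrix, a single-bit error pattern is recovered in one pass because its column matches the syndrome exactly.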
APA, Harvard, Vancouver, ISO, and other styles
25

Wu, Tianjun, Yuexiang Yang, Chi Wang, and Rui Wang. "Study on Massive-Scale Slow-Hash Recovery Using Unified Probabilistic Context-Free Grammar and Symmetrical Collaborative Prioritization with Parallel Machines." Symmetry 11, no. 4 (April 1, 2019): 450. http://dx.doi.org/10.3390/sym11040450.

Full text
Abstract:
Slow-hash algorithms are proposed to defend against traditional offline password recovery by making the hash function very slow to compute. In this paper, we study the problem of slow-hash recovery on a large scale. We attack the problem by proposing a novel concurrent model that guesses the target password hash by leveraging known passwords from a largest-ever password corpus. Previously proposed password-reuse learning models are specifically designed for targeted online guessing of a single hash and thus cannot be efficiently parallelized for massive-scale offline recovery, which is demanded by modern hash-cracking tasks. In particular, because the size of a probabilistic context-free grammar (PCFG) model is non-trivial and keeping track of the next most probable password to guess across all global accounts is difficult, we choose clever data structures and only expand transformations as needed to make the attack computationally tractable. Our adoption of a max-min heap, which globally ranks weak accounts for both expanding and guessing according to unified PCFGs and allows for concurrent global ranking, significantly increases the number of hashes that can be recovered within limited time. For example, 59.1% of the accounts in one of our target password lists can be found in our source corpus, allowing our solution to recover 20.1% of the accounts within one week at an average speed of 7200 non-identical passwords cracked per hour; previous solutions such as oclHashcat (using the default configuration) crack at an average speed of 28 and need months to recover the same number of accounts with equal computing resources (and thus are infeasible for a real-world attacker who would weigh the gain against the cracking cost). This implies an underestimated threat to slow-hash-protected password dumps.
Our method provides organizations with a better model of offline attackers and helps them better decide the hashing costs of slow-hash algorithms and detect potential vulnerable credentials before hackers do.
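The "expand transformations only as needed" heap idea can be sketched with Python's `heapq`. This is a simplified, hypothetical layout (flat structures with a shared probability, not the paper's unified PCFG): candidates are emitted in globally decreasing probability while each structure is expanded lazily.

```python
import heapq

def guesses_in_probability_order(patterns, limit):
    """Emit password guesses in globally decreasing probability using a
    heap (heapq is a min-heap, so probabilities are negated). `patterns`
    maps a base structure to (probability, candidate list) -- a toy
    stand-in for PCFG terminal expansion."""
    heap = []
    for base, (prob, candidates) in patterns.items():
        # lazily expand: push only the first candidate of each structure
        heapq.heappush(heap, (-prob, base, 0, candidates))
    out = []
    while heap and len(out) < limit:
        neg_p, base, i, cands = heapq.heappop(heap)
        out.append((cands[i], -neg_p))
        if i + 1 < len(cands):
            # only now expand the next candidate of this structure
            heapq.heappush(heap, (neg_p, base, i + 1, cands))
    return out
```

Because only one pending candidate per structure sits in the heap at a time, memory stays bounded even when the full guess space is enormous.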
APA, Harvard, Vancouver, ISO, and other styles
26

Mutny, Mojmir, Johannes Kirschner, and Andreas Krause. "Experimental Design for Optimization of Orthogonal Projection Pursuit Models." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10235–42. http://dx.doi.org/10.1609/aaai.v34i06.6585.

Full text
Abstract:
Bayesian optimization and kernelized bandit algorithms are widely used techniques for sequential black box function optimization with applications in parameter tuning, control, robotics among many others. To be effective in high dimensional settings, previous approaches make additional assumptions, for example on low-dimensional subspaces or an additive structure. In this work, we go beyond the additivity assumption and use an orthogonal projection pursuit regression model, which strictly generalizes additive models. We present a two-stage algorithm motivated by experimental design to first decorrelate the additive components. Subsequently, the bandit optimization benefits from the statistically efficient additive model. Our method provably decorrelates the fully additive model and achieves optimal sublinear simple regret in terms of the number of function evaluations. To prove the rotation recovery, we derive novel concentration inequalities for linear regression on subspaces. In addition, we specifically address the issue of acquisition function optimization and present two domain dependent efficient algorithms. We validate the algorithm numerically on synthetic as well as real-world optimization problems.
APA, Harvard, Vancouver, ISO, and other styles
27

Zhou, Mingliang, Qin Mao, Chen Zhong, Weiqin Zhang, and Changzhi Chen. "Spatial Error Concealment by Jointing Gauss Bayes Model and SVD for High Efficiency Video Coding." International Journal of Pattern Recognition and Artificial Intelligence 33, no. 14 (May 15, 2019): 1954037. http://dx.doi.org/10.1142/s0218001419540375.

Full text
Abstract:
This paper proposes a novel sparsity-based error concealment (EC) algorithm that integrates the Gauss Bayes model and singular value decomposition (SVD) for high efficiency video coding (HEVC). Under the sequential recovery framework, pixels in missing blocks are successively reconstructed with the Gauss Bayes model. We find that the estimation error follows a Gaussian distribution in HEVC, so the error-pixel estimation problem can be recast as Bayesian estimation. We utilize the SVD technique to select sample pixels, which yields high estimation accuracy and reduces estimation error. A new confidence-based recovery order is established to resolve the error-propagation problem. Experimental results show that, compared to other state-of-the-art EC algorithms, the proposed method gives better reconstruction performance in terms of objective and subjective evaluations, with significantly lower complexity.
APA, Harvard, Vancouver, ISO, and other styles
28

A, Kadar A., Parameshwaran Ramalingam, Rithanathith S, Gurugubelli Vasudeva Rao, and Lakshminarayanan G. "FPGA Implementation of Adaptive Sampling Algorithm for Space Applications." ECS Transactions 107, no. 1 (April 24, 2022): 5839–46. http://dx.doi.org/10.1149/10701.5839ecst.

Full text
Abstract:
Adaptive sampling is a signal processing technique used in various aerospace applications. Many adaptive algorithms used for instrumentation and telemetry systems process the signal in the frequency domain, which leads to high computational cost and power consumption. ASA-m solves this problem by performing all operations in the time domain: it estimates the subsequent sampling frequency needed to collect meaningful information based on mean-velocity prediction. This novel algorithm is implemented on the Spartan 3E FPGA board to study the device power and hardware utilization for real-time vibration signal datasets. This paper brings out the significant recovery of data with a smaller number of samples and lower hardware utilization for the state-of-the-art ASA-m algorithm.
APA, Harvard, Vancouver, ISO, and other styles
29

Mukhoty, Bhaskar, Debojyoti Dey, and Purushottam Kar. "Corruption-Tolerant Algorithms for Generalized Linear Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 9243–50. http://dx.doi.org/10.1609/aaai.v37i8.26108.

Full text
Abstract:
This paper presents SVAM (Sequential Variance-Altered MLE), a unified framework for learning generalized linear models under adversarial label corruption in training data. SVAM extends to tasks such as least squares regression, logistic regression, and gamma regression, whereas many existing works on learning with label corruptions focus only on least squares regression. SVAM is based on a novel variance reduction technique that may be of independent interest and works by iteratively solving weighted MLEs over variance-altered versions of the GLM objective. SVAM offers provable model recovery guarantees superior to the state-of-the-art for robust regression even when a constant fraction of training labels are adversarially corrupted. SVAM also empirically outperforms several existing problem-specific techniques for robust regression and classification. Code for SVAM is available at https://github.com/purushottamkar/svam/
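The flavour of iteratively solving weighted MLEs with altered variance can be illustrated with a generic reweighted least-squares loop. This is a stand-in sketch, not SVAM itself: each round solves a weighted least-squares problem whose Gaussian weights shrink the influence of large-residual (likely corrupted) samples, and the variance parameter is annealed between rounds.

```python
import numpy as np

def reweighted_least_squares(X, y, beta_max=16.0, steps=8):
    """Robust linear regression via iteratively reweighted least squares
    (illustrative; SVAM's actual updates and guarantees are in the paper).
    """
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS start
    beta = 1.0
    for _ in range(steps):
        r = y - X @ w
        wt = np.exp(-beta * r**2)              # downweight outliers
        Xw = X * wt[:, None]
        # weighted normal equations: X^T W X w = X^T W y
        w = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        beta = min(2.0 * beta, beta_max)       # anneal the variance
    return w
```

On data where 20% of labels are adversarially shifted, the plain least-squares fit is visibly biased while the reweighted fit recovers the true coefficients closely.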
APA, Harvard, Vancouver, ISO, and other styles
30

Zhang, Yang, Chaoyue Chen, Wei Huang, Yangfan Cheng, Yuen Teng, Lei Zhang, and Jianguo Xu. "Machine Learning-Based Radiomics of the Optic Chiasm Predict Visual Outcome Following Pituitary Adenoma Surgery." Journal of Personalized Medicine 11, no. 10 (September 30, 2021): 991. http://dx.doi.org/10.3390/jpm11100991.

Full text
Abstract:
Preoperative prediction of visual recovery after pituitary adenoma surgery remains a challenge. We aimed to investigate the value of MRI-based radiomics of the optic chiasm in predicting postoperative visual field outcome using machine learning technology. A total of 131 pituitary adenoma patients were retrospectively enrolled and divided into the recovery group (N = 79) and the non-recovery group (N = 52) according to visual field outcome following surgical chiasmal decompression. Radiomic features were extracted from the optic chiasm on preoperative coronal T2-weighted imaging. Least absolute shrinkage and selection operator regression was first used to select optimal features. Then, three machine learning algorithms were employed to develop radiomic models to predict visual recovery: support vector machine (SVM), random forest, and linear discriminant analysis. The prognostic performance of the models was evaluated via five-fold cross-validation. The results showed that radiomic models using the different machine learning algorithms all achieved an area under the curve (AUC) over 0.750. The SVM-based model showed the best predictive performance for visual field recovery, with the highest AUC of 0.824. In conclusion, machine learning-based radiomics of the optic chiasm on routine MR imaging could potentially serve as a novel approach to preoperatively predict visual recovery and allow personalized counseling for individual pituitary adenoma patients.
APA, Harvard, Vancouver, ISO, and other styles
31

Dodd, Peter J., Jeff J. Pennington, Liza Bronner Murrison, and David W. Dowdy. "Simple Inclusion of Complex Diagnostic Algorithms in Infectious Disease Models for Economic Evaluation." Medical Decision Making 38, no. 8 (November 2018): 930–41. http://dx.doi.org/10.1177/0272989x18807438.

Full text
Abstract:
Introduction. Cost-effectiveness models for infectious disease interventions often require transmission models that capture the indirect benefits from averted subsequent infections. Compartmental models based on ordinary differential equations are commonly used in this context. Decision trees are frequently used in cost-effectiveness modeling and are well suited to describing diagnostic algorithms. However, complex decision trees are laborious to specify as compartmental models and cumbersome to adapt, limiting the detail of algorithms typically included in transmission models. Methods. We consider an approximation replacing a decision tree with a single holding state for systems where the time scale of the diagnostic algorithm is shorter than time scales associated with disease progression or transmission. We describe recursive algorithms for calculating the outcomes and mean costs and delays associated with decision trees, as well as design strategies for computational implementation. We assess the performance of the approximation in a simple model of transmission/diagnosis and its role in simplifying a model of tuberculosis diagnostics. Results. When diagnostic delays were short relative to recovery rates, our approximation provided a good account of infection dynamics and the cumulative costs of diagnosis and treatment. Proportional errors were below 5% so long as the longest delay in our 2-step algorithm was under 20% of the recovery time scale. Specifying new diagnostic algorithms in our tuberculosis model was reduced from several tens to just a few lines of code. Discussion. For conditions characterized by a diagnostic process that is neither instantaneous nor protracted (relative to transmission dynamics), this novel approach retains the advantages of decision trees while embedding them in more complex models of disease transmission. Concise specification and code reuse increase transparency and reduce potential for error.
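The recursive collapse of a decision tree into per-outcome probabilities and mean costs can be sketched directly. The nested-dict layout below is a schematic assumption (internal nodes carry a cost and probability-weighted branches; leaves carry a cost and an outcome name), not the paper's implementation.

```python
def tree_outcomes(node):
    """Recursively collapse a diagnostic decision tree into, per terminal
    outcome, (total probability, conditional mean cost along the paths
    reaching it). Illustrative sketch of the recursion the paper describes.
    """
    acc = {}

    def walk(n, p, cost):
        cost += n.get("cost", 0.0)
        if "outcome" in n:                       # leaf: accumulate
            q, m = acc.get(n["outcome"], (0.0, 0.0))
            acc[n["outcome"]] = (q + p, m + p * cost)
            return
        for bp, child in n["branches"]:          # internal: recurse
            walk(child, p * bp, cost)

    walk(node, 1.0, 0.0)
    # convert summed p*cost into a conditional mean cost per outcome
    return {k: (q, m / q) for k, (q, m) in acc.items()}
```

Replacing the full tree with a single holding state then only requires these summary quantities (outcome probabilities, mean costs, mean delays) rather than the tree itself.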
APA, Harvard, Vancouver, ISO, and other styles
32

Xu, Huihui, and Fei Li. "Multilevel Pyramid Network for Monocular Depth Estimation Based on Feature Refinement and Adaptive Fusion." Electronics 11, no. 16 (August 20, 2022): 2615. http://dx.doi.org/10.3390/electronics11162615.

Full text
Abstract:
As a traditional computer vision task, monocular depth estimation plays an essential role in novel view 3D reconstruction and augmented reality. Convolutional neural network (CNN)-based models have achieved good performance for this task. However, in the depth map recovered by some existing deep learning-based methods, local details are still lost. To generate convincing depth maps with rich local details, this study proposes an efficient multilevel pyramid network for monocular depth estimation based on feature refinement and adaptive fusion. Specifically, a multilevel spatial feature generation scheme is developed to extract rich features from the spatial branch. Then, a feature refinement module that combines and enhances these multilevel contextual and spatial information is designed to derive detailed information. In addition, we design an adaptive fusion block for improving the capability of fully connected features. The performance evaluation results on public RGBD datasets indicate that the proposed approach can recover reasonable depth outputs with better details and outperform several depth recovery algorithms from a qualitative and quantitative perspective.
APA, Harvard, Vancouver, ISO, and other styles
33

Zhao, Shengjie, Jianchen Zhu, and Di Wu. "Design and Application of a Greedy Pursuit Algorithm Adapted to Overcomplete Dictionary for Sparse Signal Recovery." Traitement du Signal 37, no. 5 (November 25, 2020): 723–32. http://dx.doi.org/10.18280/ts.370504.

Full text
Abstract:
Compressive sensing (CS) is a novel paradigm for recovering a sparse signal in the compressed domain. In some overcomplete dictionaries, most practical signals are sparse rather than orthonormal. The signal-space greedy method can derive optimal or near-optimal projections, making it possible to identify the few most relevant dictionary atoms of an arbitrary signal. More practically, such projections can be processed by standard CS recovery algorithms. This paper proposes a signal space subspace pursuit (SSSP) method to compute sparse signal representations with overcomplete dictionaries whenever the sensing matrix satisfies the restricted isometry property adapted to the dictionary (D-RIP). Specifically, theoretical guarantees are provided for recovering signals from their measurements with overwhelming probability, as long as the sensing matrix satisfies the D-RIP. In addition, a thorough analysis is performed to minimize the number of measurements required for such guarantees. Simulation results demonstrate the validity of the theory and the superiority of the proposed approach.
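As a baseline for the greedy-pursuit family the abstract builds on, plain orthogonal matching pursuit (OMP) against an overcomplete dictionary can be sketched as follows; the paper's SSSP replaces the atom-selection step with signal-space projections, which this sketch does not attempt.

```python
import numpy as np

def omp(D, x, k):
    """Plain orthogonal matching pursuit: greedily pick the dictionary
    atom (unit-norm column of D) most correlated with the residual, then
    re-fit the signal on the selected support by least squares."""
    residual, support = x.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef          # orthogonalized residual
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a
```

With a dictionary whose extra atoms do not overlap the true support, a 2-sparse signal is recovered exactly in two iterations.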
APA, Harvard, Vancouver, ISO, and other styles
34

Haider, Hassaan, Jawad Ali Shah, Kushsairy Kadir, and Najeeb Khan. "Sparse Reconstruction Using Hyperbolic Tangent as Smooth l1-Norm Approximation." Computation 11, no. 1 (January 4, 2023): 7. http://dx.doi.org/10.3390/computation11010007.

Full text
Abstract:
In the Compressed Sensing (CS) framework, the underdetermined system of linear equations (USLE) can have infinitely many possible solutions. However, we intend to find the sparsest possible solution, i.e., l0-norm minimization. Finding the l0-norm solution out of infinitely many possible solutions is an NP-hard, non-convex optimization problem. It has been practically proven that the l0-norm penalty can be adequately estimated by the l1-norm, which recasts the non-convex minimization problem as a convex problem. However, the l1-norm is non-differentiable, so gradient-based minimization algorithms are not applicable; for this very reason, the l1-norm needs to be approximated by a smooth function. Iterative shrinkage algorithms provide an efficient method to numerically minimize the l1-regularized least-squares optimization problem; these algorithms must induce sparsity in their solutions to meet the CS recovery requirement. In this research article, we have developed a novel recovery method that uses the hyperbolic tangent function to recover undersampled signals/images in the CS framework. In our work, the l1-norm and soft thresholding are both approximated with hyperbolic tangent functions. We have also proposed criteria to tune the optimization parameters to get optimal results, and the error bounds for the proposed l1-norm approximation are evaluated. To evaluate the performance of our proposed method, we have utilized a dataset comprising a 1-D sparse signal, a compressively sampled MR image, and cardiac cine MRI. MRI is an important imaging modality for assessing cardiac vascular function: it provides the ejection fraction and cardiac output of the heart. However, this advantage comes at the cost of a slow acquisition process, so it is essential to speed up acquisition to take full benefit of cardiac cine MRI.
Numerical results based on performance metrics, such as Structural Similarity (SSIM), Peak Signal to Noise Ratio (PSNR) and Root Mean Square Error (RMSE) show that the proposed tangent hyperbolic based CS recovery offers a much better performance as compared to the traditional Iterative Soft Thresholding (IST) recovery methods.
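One common tanh-based smoothing of the l1-norm can be sketched as follows. This is an illustrative surrogate under the assumption that |x| is approximated by x·tanh(a·x) (the paper derives its own tanh-based approximations and error bounds):

```python
import numpy as np

def smooth_abs(x, a=50.0):
    """Smooth surrogate for |x|: x * tanh(a*x) -> |x| as a -> infinity.
    Differentiable everywhere, so gradient-based solvers apply."""
    return x * np.tanh(a * x)

def smooth_l1(x, a=50.0):
    """Smooth l1-norm surrogate usable inside gradient-based CS recovery."""
    return np.sum(smooth_abs(x, a))
```

The parameter `a` trades smoothness near zero against approximation accuracy; for |x| well above 1/a the surrogate is numerically indistinguishable from |x|.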
APA, Harvard, Vancouver, ISO, and other styles
35

Xu, Renjie, Ting Yun, Lin Cao, and Yunfei Liu. "Compression and Recovery of 3D Broad-Leaved Tree Point Clouds Based on Compressed Sensing." Forests 11, no. 3 (February 26, 2020): 257. http://dx.doi.org/10.3390/f11030257.

Full text
Abstract:
The terrestrial laser scanner (TLS) has been widely used in forest inventories. However, with increasing precision of TLS, storing and transmitting tree point clouds become more challenging. In this paper, a novel compressed sensing (CS) scheme for broad-leaved tree point clouds is proposed by analyzing and comparing different sparse bases, observation matrices, and reconstruction algorithms. Our scheme starts by eliminating outliers and simplifying point clouds with statistical filtering and voxel filtering. The scheme then applies Haar sparse basis to thin the coordinate data based on the characteristics of the broad-leaved tree point clouds. An observation procedure down-samples the point clouds with the partial Fourier matrix. The regularized orthogonal matching pursuit algorithm (ROMP) finally reconstructs the original point clouds. The experimental results illustrate that the proposed scheme can preserve morphological attributes of the broad-leaved tree within a range of relative error: 0.0010%–3.3937%, and robustly extend to plot-level within a range of mean square error (MSE): 0.0063–0.2245.
APA, Harvard, Vancouver, ISO, and other styles
36

Gürses, Dildar, Pranav Mehta, Sadiq M. Sait, and Ali Riza Yildiz. "African vultures optimization algorithm for optimization of shell and tube heat exchangers." Materials Testing 64, no. 8 (August 1, 2022): 1234–41. http://dx.doi.org/10.1515/mt-2022-0050.

Full text
Abstract:
Nature-inspired optimization algorithms, called meta-heuristics, have proved versatile in engineering design fields. Their adaptability is also exploited in various areas of the Internet of Things, structural design, and thermal system design. With the very rapid progress of industrial modernization, waste heat recovery from power-generating and thermal engineering systems is imperative to reduce emissions and comply with government norms, and the heat exchanger is the component applied in various heat recovery processes. Of the available designs, shell and tube heat exchangers (SHTHEs) are the most commonly adopted for the heat recovery process. Hence, cost minimization is the major objective when designing a heat exchanger, subject to various constraints and optimized design variables. In this study, cost minimization of the SHTHE is performed by applying a novel metaheuristic, the African vultures optimization algorithm (AVOA). By adopting the AVOA, the best optimized value (least cost of the heat exchanger) and the corresponding design parameters are obtained while satisfying all constraints. The AVOA was found to achieve the best results among the compared algorithms and can also be used for cost optimization of plate-fin and tube-fin heat exchanger case studies.
APA, Harvard, Vancouver, ISO, and other styles
37

Okuboyejo, Damilola A., and Oludayo O. Olugbara. "Classification of Skin Lesions Using Weighted Majority Voting Ensemble Deep Learning." Algorithms 15, no. 12 (November 24, 2022): 443. http://dx.doi.org/10.3390/a15120443.

Full text
Abstract:
The conventional dermatology practice of performing noninvasive screening tests to detect skin diseases is a source of escapable diagnostic inaccuracies. Literature suggests that automated diagnosis is essential for improving diagnostic accuracies in medical fields such as dermatology, mammography, and colonography. Classification is an essential component of an assisted automation process that is rapidly gaining attention in the discipline of artificial intelligence for successful diagnosis, treatment, and recovery of patients. However, classifying skin lesions into multiple classes is challenging for most machine learning algorithms, especially for extremely imbalanced training datasets. This study proposes a novel ensemble deep learning algorithm based on the residual network with the next dimension and the dual path network with confidence preservation to improve the classification performance of skin lesions. The distributed computing paradigm was applied in the proposed algorithm to speed up the inference process by a factor of 0.25 for a faster classification of skin lesions. The algorithm was experimentally compared with 16 deep learning and 12 ensemble deep learning algorithms to establish its discriminating prowess. The experimental comparison was based on dermoscopic images congregated from the publicly available international skin imaging collaboration databases. We propitiously recorded up to 82.52% average sensitivity, 99.00% average specificity, 98.54% average balanced accuracy, and 92.84% multiclass accuracy without prior segmentation of skin lesions to outstrip numerous state-of-the-art deep learning algorithms investigated.
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Chunsheng, Hong Shan, and Bin Wang. "Wireless Sensor Network Localization via Matrix Completion Based on Bregman Divergence." Sensors 18, no. 9 (September 6, 2018): 2974. http://dx.doi.org/10.3390/s18092974.

Full text
Abstract:
One of the main challenges faced by wireless sensor network (WSN) localization is the positioning accuracy of WSN nodes. Existing algorithms struggle to deal with the impulse noise that is universal and unavoidable in practice, resulting in lower positioning accuracy. Aiming at this problem and introducing the Bregman divergence, we propose in this paper a novel WSN localization algorithm via matrix completion (LBDMC). Based on the natural low-rank character of the Euclidean Distance Matrix (EDM), the problem of EDM recovery is formulated as matrix completion in a noisy environment. A regularized matrix completion model is established that smooths the impulse noise by leveraging the L1,2-norm, and a multivariate Bregman divergence is defined to solve the model and obtain the EDM estimator. Node localization is then performed with the multi-dimensional scaling (MDS) method. Multi-faceted comparison experiments with existing algorithms, under a variety of noise conditions, demonstrate the superiority of LBDMC regarding positioning accuracy and robustness while ensuring high efficiency. Notably, the mean localization error of LBDMC is about ten times smaller than that of other algorithms once the sampling rate reaches a certain level, such as >30%.
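The Bregman divergence the abstract invokes has a compact generic definition, sketched below for any differentiable convex function; the paper tailors a multivariate version to its regularized matrix-completion objective, which this sketch does not reproduce.

```python
import numpy as np

def bregman(f, grad_f, x, y):
    """Bregman divergence D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>,
    i.e. the gap between f at x and its first-order expansion around y.
    Non-negative whenever f is convex."""
    return f(x) - f(y) - np.dot(grad_f(y), x - y)
```

For f(v) = ||v||^2 the divergence reduces to the squared Euclidean distance ||x - y||^2, a quick sanity check on the definition.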
APA, Harvard, Vancouver, ISO, and other styles
39

Krucoff, Max O., Katie Zhuang, David MacLeod, Allen Yin, Yoon Woo Byun, Roberto Jose Manson, Dennis A. Turner, Laura Oliveira, and Mikhail A. Lebedev. "A novel paraplegia model in awake behaving macaques." Journal of Neurophysiology 118, no. 3 (September 1, 2017): 1800–1808. http://dx.doi.org/10.1152/jn.00327.2017.

Full text
Abstract:
Lower limb paralysis from spinal cord injury (SCI) or neurological disease carries a poor prognosis for recovery and remains a large societal burden. Neurophysiological and neuroprosthetic research have the potential to improve quality of life for these patients; however, the lack of an ethical and sustainable nonhuman primate model for paraplegia hinders their advancement. Therefore, our multidisciplinary team developed a way to induce temporary paralysis in awake behaving macaques by creating a fully implantable lumbar epidural catheter-subcutaneous port system that enables easy and reliable targeted drug delivery for sensorimotor blockade. During treadmill walking, aliquots of 1.5% lidocaine with 1:200,000 epinephrine were percutaneously injected into the ports of three rhesus macaques while surface electromyography (EMG) recorded muscle activity from their quadriceps and gastrocnemii. Diminution of EMG amplitude, loss of voluntary leg movement, and inability to bear weight were achieved for 60–90 min in each animal, followed by a complete recovery of function. The monkeys remained alert and cooperative during the paralysis trials and continued to take food rewards, and the ports remained functional after several months. This technique will enable recording from the cortex and/or spinal cord in awake behaving nonhuman primates during the onset, maintenance, and resolution of paraplegia for the first time, thus opening the door to answering basic neurophysiological questions about the acute neurological response to spinal cord injury and recovery. It will also negate the need to permanently injure otherwise high-value research animals for certain experimental paradigms aimed at developing and testing neural interface decoding algorithms for patients with lower extremity dysfunction. 
NEW & NOTEWORTHY A novel implantable lumbar epidural catheter-subcutaneous port system enables targeted drug delivery and induction of temporary paraplegia in awake, behaving nonhuman primates. Three macaques displayed loss of voluntary leg movement for 60–90 min after injection of lidocaine with epinephrine, followed by a full recovery. This technique for the first time will enable ethical live recording from the proximal central nervous system during the acute onset, maintenance, and resolution of paraplegia.
APA, Harvard, Vancouver, ISO, and other styles
40

Zhao, Li Ye. "Method Based on Wavelet and Empirical Mode Decomposition for Extracting the Gravity Signal." Applied Mechanics and Materials 668-669 (October 2014): 1076–80. http://dx.doi.org/10.4028/www.scientific.net/amm.668-669.1076.

Full text
Abstract:
The measurement data of marine gravity contain substantial noise, whose low-frequency part has a frequency similar to that of the gravity signal, making it difficult to suppress the noise and extract the gravity signal with classical algorithms. Therefore, to effectively eliminate the noise in the measured gravity data and improve the accuracy of the extracted signal, a novel method combining wavelet decomposition and Empirical Mode Decomposition (EMD) is proposed to extract the sea gravity anomaly signal. First, the measured gravity signal is decomposed into detail signals and approximate signals. Second, EMD is used to extract the low-frequency part of the decomposed signals, and the estimate of the gravity anomaly is reconstructed by the inverse wavelet transform. A de-noising experiment was simulated on measured gravity data. Theoretical analysis and simulation results indicate that the proposed method can effectively eliminate the noise in the measured gravity data and recover the waveform of the gravity signal, increasing the accuracy of the extracted signal by approximately 40% over classical algorithms.
APA, Harvard, Vancouver, ISO, and other styles
41

Zhao, Huihuang, Jianzhen Chen, Shibiao Xu, Ying Wang, and Zhijun Qiao. "Compressive sensing for noisy solder joint imagery based on convex optimization." Soldering & Surface Mount Technology 28, no. 2 (April 4, 2016): 114–22. http://dx.doi.org/10.1108/ssmt-09-2014-0017.

Full text
Abstract:
Purpose The purpose of this paper is to develop a compressive sensing (CS) algorithm for noisy solder joint imagery compression and recovery. A fast gradient-based compressive sensing (FGbCS) approach is proposed based on convex optimization. The proposed algorithm improves performance in terms of peak signal-to-noise ratio (PSNR) and computational cost. Design/methodology/approach Unlike traditional CS methods, the authors first transform a noisy solder joint image into a sparse signal by a discrete cosine transform (DCT), so that the reconstruction of noisy solder joint imagery becomes a convex optimization problem, which is then solved by a gradient-based method. To improve efficiency, the authors exploit the Lipschitz gradient of the convex objective, replacing an iteration parameter with the Lipschitz constant. Moreover, the FGbCS algorithm recovers the noisy solder joint imagery under different parameters. Findings Experiments reveal that the proposed algorithm achieves better PSNR with lower computational cost than classical algorithms such as Orthogonal Matching Pursuit (OMP), Greedy Basis Pursuit (GBP), Subspace Pursuit (SP), Compressive Sampling Matching Pursuit (CoSaMP), and Iterative Re-weighted Least Squares (IRLS). The proposed algorithm converges at the faster rate O(1/k^2) instead of O(1/k). Practical implications This paper provides a novel methodology for the CS of noisy solder joint imagery, and the proposed algorithm can also be used for other imagery compression and recovery. Originality/value According to CS theory, a sparse or compressible signal can be represented by fewer bases than required by the Nyquist theorem. The new development may provide fundamental guidelines for noisy imagery compression and recovery.
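The "replace the iteration parameter by the Lipschitz constant" idea can be shown in minimal form on a least-squares objective. This sketch omits the sparsity term and the acceleration that yields the O(1/k^2) rate; it only illustrates the fixed 1/L step size, where L = ||A||_2^2 is the Lipschitz constant of the gradient of f(x) = 0.5||Ax - b||^2.

```python
import numpy as np

def gradient_ls(A, b, iters=500):
    """Gradient descent on f(x) = 0.5 * ||Ax - b||^2 with the fixed step
    1/L, L = ||A||_2^2 (squared spectral norm). The step is guaranteed
    to decrease f without any line search."""
    L = np.linalg.norm(A, 2) ** 2          # largest singular value, squared
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= (A.T @ (A @ x - b)) / L       # gradient step with 1/L
    return x
```

On a consistent overdetermined system the iterates converge to the least-squares solution; adding Nesterov momentum to the same step size is what upgrades the rate from O(1/k) to O(1/k^2).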
APA, Harvard, Vancouver, ISO, and other styles
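The fast gradient scheme described in this abstract belongs to the family of accelerated proximal methods. As a hedged illustration only (not the paper's FGbCS implementation), a FISTA-style loop for the generic l1-regularized recovery problem min 0.5*||Ax - y||^2 + lam*||x||_1 looks like this; the regularization weight `lam` and iteration count are arbitrary choices for the sketch:

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, y, lam=0.005, n_iter=500):
    """Accelerated proximal gradient (FISTA-style) for sparse recovery.
    The momentum step is what yields the O(1/k^2) rate instead of O(1/k)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return x
```

In the paper's setting the sparse signal would be DCT coefficients of the solder joint image; here `A` is just a generic sensing matrix.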
42

Chen, Tao, Huanxin Wu, and Lutao Liu. "A Joint Doppler Frequency Shift and DOA Estimation Algorithm Based on Sparse Representations for Colocated TDM-MIMO Radar." Journal of Applied Mathematics 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/421391.

Full text
Abstract:
We address the problem of new joint Doppler frequency shift (DFS) and direction of arrival (DOA) estimation for colocated TDM-MIMO radar, a novel technology applied to autocruise and driving-safety systems in recent years. The signal model of colocated TDM-MIMO radar with few transmitter or receiver channels is described, and the "time-varying steering vector" model is proved. Inspired by sparse representation theory, we present a new processing scheme for joint DFS and DOA estimation based on the new input signal model of colocated TDM-MIMO radar. An overcomplete redundant dictionary for the angle-frequency space is constructed in order to obtain sparse representations of the input signal. The SVD-SR algorithm, which performs joint estimation based on sparse representations using SVD decomposition with the OMP algorithm, and the improved M-FOCUSS algorithm, which combines classical M-FOCUSS with a joint sparse recovery spectrum, are applied to the new signal model to solve the multiple measurement vectors (MMV) problem. The improved M-FOCUSS algorithm is more robust than the SVD-SR and JS-SR algorithms in terms of coherent-signal resolution and estimation accuracy. Finally, simulation experiments show that the proposed algorithms and schemes are feasible and can be applied in practice.
APA, Harvard, Vancouver, ISO, and other styles
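The OMP step used inside the SVD-SR scheme above is the standard greedy pursuit. A minimal, self-contained sketch of generic OMP (not the paper's radar-specific variant) is:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom most
    correlated with the residual, then re-fit all chosen atoms by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

In the paper, the columns of `A` would be atoms of the overcomplete angle-frequency dictionary; here it is a generic dictionary with unit-norm columns.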
43

Shin, Jongho, Youngmi Baek, Jaeseong Lee, and Seonghun Lee. "Cyber-Physical Attack Detection and Recovery Based on RNN in Automotive Brake Systems." Applied Sciences 9, no. 1 (December 26, 2018): 82. http://dx.doi.org/10.3390/app9010082.

Full text
Abstract:
The violation of data integrity in automotive Cyber-Physical Systems (CPS) may lead to dangerous situations for drivers and pedestrians in terms of safety. In particular, since cyber-attacks on sensors can degrade data accuracy and consistency more easily than any other attack, we investigate attack detection and identification based on deep learning for the wheel speed sensors of automotive CPS. For faster recovery of the physical system upon detection of a cyber-attack, a specific value is estimated to substitute for the false data. To the best of our knowledge, no existing work joins sensor-attack detection with vehicle speed estimation. In this work, we design a novel method that combines attack detection and identification with vehicle speed estimation from wheel speed sensors, to improve the safety of CPS even under attack. First, we define states of the sensors based on the attack cases that can occur. Second, a Recurrent Neural Network (RNN) is applied to detect and identify wheel speed sensor attacks. Third, in order to estimate the vehicle speed accurately, we employ Weighted Average (WA), one of the fusion algorithms, to assign a different weight to each sensor. Since environmental uncertainty while driving affects different vehicle characteristics and causes performance degradation, the recovery mechanism needs to adapt to changing environments. Therefore, we estimate the vehicle speed after assigning a different weight to each sensor depending on the driving situation, classified by analyzing driving data. Experiments including training, validation, and testing are carried out with actual measurements obtained while driving on real roads. For fault detection and identification, classification accuracy is evaluated; Mean Squared Error (MSE) is calculated to verify that the speed is estimated accurately. The classification accuracy on the test additive-attack data is 99.4978%. The MSE of our proposed speed estimation algorithm is 1.7786, about 0.2 lower than the MSEs of other algorithms. We demonstrate that our system maintains data integrity well and is relatively safe in comparison with systems applying other algorithms.
APA, Harvard, Vancouver, ISO, and other styles
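The Weighted Average fusion step the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's calibrated scheme: the weights would come from the driving-situation classifier, and the `attacked` set from the RNN detector; both are assumed inputs here.

```python
def fuse_wheel_speeds(readings, weights, attacked=()):
    """Weighted-average fusion of wheel-speed readings. Sensors flagged as
    attacked by the detector are dropped and the remaining weights
    renormalized, so a compromised wheel cannot skew the estimate."""
    pairs = [(r, w) for i, (r, w) in enumerate(zip(readings, weights))
             if i not in attacked]
    total = sum(w for _, w in pairs)
    return sum(r * w for r, w in pairs) / total
```

With equal weights and no attack this reduces to a plain mean; excluding a flagged sensor renormalizes over the trusted three wheels.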
44

Tan, Huachun, Jianshuai Feng, Guangdong Feng, Wuhong Wang, and Yu-Jin Zhang. "Traffic Volume Data Outlier Recovery via Tensor Model." Mathematical Problems in Engineering 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/164810.

Full text
Abstract:
Traffic volume data is collected and used for a variety of purposes in intelligent transportation systems (ITS). However, the collected data might be abnormal due to outliers caused by malfunctions in data collection and recording systems. To fully analyze and operate on the collected data, it is necessary to develop a valid method for addressing the outlier data. Many existing algorithms study the outlier-recovery problem with time-series methods. In this paper, a multiway tensor model is proposed for representing the traffic volume data based on its intrinsic multilinear correlations, such as day to day and hour to hour. Then, a novel tensor recovery method, called ADMM-TR, is proposed for recovering outliers in traffic volume data. The proposed method is evaluated on synthetic data and real-world traffic volume data. Experimental results demonstrate the practicability, effectiveness, and advantage of the proposed method, especially on the real-world traffic volume data.
APA, Harvard, Vancouver, ISO, and other styles
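ADMM-TR itself operates on a multiway tensor; as a hedged matrix stand-in for the same idea, the classic ADMM loop that splits data into a low-rank part (the regular day-to-day traffic pattern) plus a sparse part (outliers) looks like this. The fixed penalty `mu`, `lam`, and iteration count are illustrative choices, not the paper's settings:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding, the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_plus_sparse(D, lam=None, mu=1.0, n_iter=200):
    """Basic ADMM for min ||L||_* + lam*||S||_1 subject to L + S = D."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)                    # low-rank update
        T = D - L + Y / mu                                   # sparse update
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y = Y + mu * (D - L - S)                             # dual ascent
    return L, S
```

The recovered `S` then marks the outlier entries while `L` supplies the corrected values.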
45

Xie, Zhonghua, Lihong Ma, and Lingjun Liu. "Content-Aware Compressive Sensing Recovery Using Laplacian Scale Mixture Priors and Side Information." Mathematical Problems in Engineering 2018 (2018): 1–15. http://dx.doi.org/10.1155/2018/7171352.

Full text
Abstract:
Nonlocal methods have shown great potential in many image restoration tasks, including compressive sensing (CS) reconstruction, through use of an image self-similarity prior. However, they are still limited in recovering fine-scale details and sharp features when rich repetitive patterns cannot be guaranteed or the CS measurements are corrupted. In this paper, we propose a novel CS recovery algorithm that combines nonlocal sparsity with local and global priors, which soften and complement the self-similarity assumption for irregular structures. First, a Laplacian scale mixture (LSM) prior is utilized to model dependencies among similar patches. For achieving group sparsity, each singular value of similar packed patches is modeled as a Laplacian distribution with a variable scale parameter. Second, a global prior and a compensation-based sparsity prior of the local patch are designed in order to maintain differences between packed patches. The former refers to a prediction that integrates the information at the independent processing stage and is used as side information, while the latter enforces a small (i.e., sparse) prediction error and is also modeled with the LSM model so as to obtain local sparsity. Afterward, we derive an efficient algorithm based on the expectation-maximization (EM) and approximate message passing (AMP) framework for the maximum a posteriori (MAP) estimation of the sparse coefficients. Numerical experiments show that the proposed method outperforms many CS recovery algorithms.
APA, Harvard, Vancouver, ISO, and other styles
46

Hamidi, Hodjat, Abbas Vafaei, and Seyed Amir Hassan Monadjemi. "Analysis and Evaluation of a New Algorithm Based Fault Tolerance for Computing Systems." International Journal of Grid and High Performance Computing 4, no. 1 (January 2012): 37–51. http://dx.doi.org/10.4018/jghpc.2012010103.

Full text
Abstract:
In this paper, the authors present a new approach to algorithm-based fault tolerance (ABFT) for high-performance computing systems. The ABFT approach transforms a system that does not tolerate a specific type of fault, called the fault-intolerant system, into a system that provides a specific level of fault tolerance, namely recovery. ABFT techniques detect errors by comparing parity values computed in two ways: the parallel processing of input parity values produces output parity values that are compared with parity values regenerated from the original processed outputs, and convolutional codes can be applied for the redundancy. This method is a new approach to concurrent error correction in fault-tolerant computing systems. The paper proposes a novel computing paradigm to provide fault tolerance for numerical algorithms. The authors also present, implement, and evaluate early detection in ABFT.
APA, Harvard, Vancouver, ISO, and other styles
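The "parity computed in two ways" idea behind ABFT is easiest to see on checksum-augmented matrix multiplication (the classic Huang-Abraham construction, shown here as a generic illustration rather than the paper's convolution-code scheme): the product of a column-checksum matrix and a row-checksum matrix carries checksums that can be compared against parity regenerated from the outputs.

```python
import numpy as np

def abft_matmul(A, B):
    """Augment A with a column-checksum row and B with a row-checksum column;
    the product C then carries its own parity in its last row and column."""
    Ac = np.vstack([A, A.sum(axis=0, keepdims=True)])
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
    return Ac @ Br

def parity_ok(C):
    """Detection step: regenerate parity from the data block and compare it
    with the carried checksums computed the other way."""
    data = C[:-1, :-1]
    return (np.allclose(C[-1, :-1], data.sum(axis=0))
            and np.allclose(C[:-1, -1], data.sum(axis=1)))
```

A transient fault that corrupts one output element breaks exactly one row check and one column check, which locates the error for recovery.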
47

Xie, Dong, Lixiang Li, Xinxin Niu, and Yixian Yang. "Identification of Coupled Map Lattice Based on Compressed Sensing." Mathematical Problems in Engineering 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/6435320.

Full text
Abstract:
A novel approach for the parameter identification of coupled map lattice (CML) based on compressed sensing is presented in this paper. We establish a meaningful connection between these two seemingly unrelated study topics and identify the weighted parameters using the relevant recovery algorithms in compressed sensing. Specifically, we first transform the parameter identification problem of CML into the sparse recovery problem of an underdetermined linear system. Compressed sensing provides a feasible method to solve an underdetermined linear system if the sensing matrix satisfies suitable conditions, such as the restricted isometry property (RIP) and mutual coherence. We then derive a lower bound on the mutual coherence of the coefficient matrix generated by the observed values of CML and also prove that it satisfies the RIP from a theoretical point of view. If the weighted vector of each element is sparse in the CML system, our proposed approach can recover all the weighted parameters using only about M samplings, far less than the number of lattice elements N. Another significant advantage is that our approach remains effective even if the observed data are contaminated with certain types of noise. In the simulations, we mainly show the effects of the coupling parameter and noise on the recovery rate.
APA, Harvard, Vancouver, ISO, and other styles
48

Sheng, Siyuan, Qun Huang, Sa Wang, and Yungang Bao. "PR-sketch." Proceedings of the VLDB Endowment 14, no. 10 (June 2021): 1783–96. http://dx.doi.org/10.14778/3467861.3467868.

Full text
Abstract:
Computing per-key aggregation, formulated as two phases (an update phase and a recovery phase), is indispensable in streaming data analysis. As the size and speed of data streams rise, accurate per-key information is useful in many applications such as anomaly detection, attack prevention, and online diagnosis. Even though many algorithms have been proposed for per-key aggregation in stream processing, their accuracy guarantees cover only a small portion of keys. In this paper, we aim to achieve nearly full accuracy with limited resource usage. We follow the line of sketch-based techniques. We observe that existing methods suffer from high errors for most keys: they track keys with complicated mechanisms in the update phase and simply calculate per-key aggregations from specific counters in the recovery phase. Therefore, we present PR-Sketch, a novel sketching design that addresses these two limitations. PR-Sketch builds linear equations between counter values and per-key aggregations to improve accuracy, and records keys in the recovery phase to reduce resource usage in the update phase. We also provide an extension called fast PR-Sketch to further improve the processing rate. We derive the space complexity, time complexity, and guaranteed error probability for both PR-Sketch and fast PR-Sketch. We conduct trace-driven experiments with 100K keys and 1M items to compare our algorithms with multiple state-of-the-art methods. Results demonstrate the resource efficiency and nearly full accuracy of our algorithms.
APA, Harvard, Vancouver, ISO, and other styles
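The "linear equations between counter values and per-key aggregations" idea can be shown on a toy scale. This is an illustrative sketch only: the `HASH_ROWS` indicator vectors stand in for PR-Sketch's actual hash mapping, and real sketches would use many more counters and keys.

```python
import numpy as np

# Hypothetical per-key indicator rows standing in for the hash mapping:
# each key adds its value into a fixed subset of counters, so the counter
# vector remains a linear function of the per-key sums.
HASH_ROWS = {
    "a": np.array([1.0, 0.0, 1.0, 0.0]),
    "b": np.array([0.0, 1.0, 1.0, 0.0]),
    "c": np.array([0.0, 0.0, 1.0, 1.0]),
}

def update(counters, key, value):
    """Update phase: a few counter additions per item, no key tracking."""
    counters += HASH_ROWS[key] * value

def recover(counters, keys):
    """Recovery phase: with the key set recorded, solve the linear system
    counters = M @ x by least squares for the per-key sums."""
    M = np.stack([HASH_ROWS[k] for k in keys], axis=1)  # counters x keys
    x, *_ = np.linalg.lstsq(M, counters, rcond=None)
    return dict(zip(keys, x))
```

Because recovery solves the full system rather than reading one counter per key, hash collisions (here, the shared third counter) do not bias the estimates as long as the system is solvable.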
49

Yogi, Manas Kumar, and Dwarampudi Aiswarya. "A Comprehensive Review -Application of Bio-inspired Algorithms for Cyber Threat Intelligence Framework." Recent Research Reviews Journal 2, no. 1 (June 2023): 101–11. http://dx.doi.org/10.36548/rrrj.2023.1.08.

Full text
Abstract:
In most modern-day computing systems, security enhancements are part of the security design. The majority of the effort in providing robust security to a system lies in identifying cyber threats and recovering from cyberattacks. Many researchers have proposed sub-optimal strategies, which motivated this research. This study summarises the research gaps and proposes research directions for mitigating the associated challenges. This work reviews current methodologies to provide a framework that can automatically identify cyber threats, and examines how bio-inspired algorithms can be applied to minimize the effort involved in identifying and recovering from cyberattacks. Cyber threat intelligence frameworks serve as crucial elements in providing a secure operating environment for cyber practitioners. The design and development of a cyber threat intelligence framework is challenging not only for the cost and effort involved but also due to the intrinsically interdependent entities of cyber security. This study proposes novel principles for bridging the identified research gaps through feature engineering, a trusted computing base, and bio-inspired time optimization. There is substantial research potential in this direction, and this study is a sincere attempt toward that goal.
APA, Harvard, Vancouver, ISO, and other styles
50

Sun, Mengjiang, and Peng Chen. "Neural Network Based AMP Method for Multi-User Detection in Massive Machine-Type Communication." Electronics 9, no. 8 (August 11, 2020): 1286. http://dx.doi.org/10.3390/electronics9081286.

Full text
Abstract:
In massive machine-type communications (mMTC) scenarios, grant-free non-orthogonal multiple access becomes crucial due to the small transmission latency, limited signaling overhead and the ability to support massive connectivity. In a multi-user detection (MUD) problem, the base station (BS) is unaware of the active users and needs to detect active devices. With sporadic devices transmitting signals at any moment, the MUD problem can be formulated as a multiple measurement vector (MMV) sparse recovery problem. Through the Khatri–Rao product, we prove that the MMV problem is transformed into a single measurement vector (SMV) problem. Based on the basis pursuit de-noising approximate message passing (BPDN-AMP) algorithm, a novel learning AMP network (LAMPnet) algorithm is proposed, which is designed to reduce the false alarm probability when the required detection probability is high. Simulation results show that when the required detection probability is high, the AMP algorithm based on LAMPnet noticeably outperforms the traditional algorithms with acceptable computational complexity.
APA, Harvard, Vancouver, ISO, and other styles
