
Journal articles on the topic 'Joint sparsity structure'

Listed below are the top 50 journal articles on the topic 'Joint sparsity structure.' Abstracts are included whenever available in the metadata.

1. Huang, Junhao, Weize Sun, and Lei Huang. "Joint Structure and Parameter Optimization of Multiobjective Sparse Neural Network." Neural Computation 33, no. 4 (2021): 1113–43. http://dx.doi.org/10.1162/neco_a_01368.

Abstract:
This work addresses the problem of network pruning and proposes a novel joint training method based on a multiobjective optimization model. Most state-of-the-art pruning methods rely on user experience to select the sparsity ratio of the weight matrices or tensors, and thus suffer severe performance reduction under inappropriate user-defined parameters. Moreover, the pruned networks may be inferior due to inefficient search over the connecting architecture, especially when it is highly sparse. This work reveals that the network model may already exhibit sparse characteristics in the early stage of the backpropagation (BP) training process, and that evolutionary computation-based algorithms can accurately discover connecting architectures with satisfactory network performance. In particular, we establish a multiobjective sparse model for network pruning and propose an efficient approach that combines BP training with two modified multiobjective evolutionary algorithms (MOEAs). The BP algorithm converges quickly, and the two MOEAs search for the optimal sparse structure and refine the weights, respectively. Experiments are included to demonstrate the benefits of the proposed algorithm. We show that the proposed method obtains a desired Pareto front (PF), leading to better pruning results compared with state-of-the-art methods, especially when the network structure is highly sparse.
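As a point of reference for the user-chosen sparsity ratios this abstract criticizes, here is a minimal magnitude-pruning sketch. It is illustrative only and not the paper's MOEA-based method; the function name and thresholding rule are assumptions.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Baseline the abstract argues against: zero the smallest-magnitude
    entries so that roughly `sparsity` fraction of W becomes zero,
    with the ratio chosen by the user rather than optimized."""
    k = int(round(sparsity * W.size))
    if k == 0:
        return W.copy()
    # k-th smallest absolute value becomes the pruning threshold
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    pruned = W.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned
```

The point of the cited paper is precisely that a fixed `sparsity` argument like this one is brittle, motivating the multiobjective search instead.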

2. Qin, Si, Yimin D. Zhang, Qisong Wu, and Moeness G. Amin. "Structure-Aware Bayesian Compressive Sensing for Near-Field Source Localization Based on Sensor-Angle Distributions." International Journal of Antennas and Propagation 2015 (2015): 1–15. http://dx.doi.org/10.1155/2015/783467.

Abstract:
A novel technique for localization of narrowband near-field sources is presented. The technique utilizes the sensor-angle distribution (SAD), which treats the source range and direction-of-arrival (DOA) information as sensor-dependent phase progression. The SAD draws a parallel to quadratic time-frequency distributions and, as such, is able to reveal changes in the spatial frequency over sensor positions. For a moderate source range, the SAD signature is of a polynomial shape, which simplifies the parameter estimation. Both uniform and sparse linear arrays are considered in this work. To exploit the sparsity and continuity of the SAD signature in the joint space and spatial-frequency domain, a modified Bayesian compressive sensing algorithm is used to estimate the SAD signature. In this method, a spike-and-slab prior is used to statistically encourage sparsity of the SAD across each segmented SAD region, and a patterned prior is imposed to enforce the continuous structure of the SAD. The results are then mapped back to source range and DOA estimates for source localization. The effectiveness of the proposed technique is verified using simulation results with uniform and sparse linear arrays, where the array sensors are located on a grid with consecutive and missing positions, respectively.

3. Li, Meng, Liang Yan, and Qianying Wang. "Group Sparse Regression-Based Learning Model for Real-Time Depth-Based Human Action Prediction." Mathematical Problems in Engineering 2018 (December 24, 2018): 1–7. http://dx.doi.org/10.1155/2018/8201509.

Abstract:
This paper addresses the problem of predicting human actions in depth videos. Due to the complex spatiotemporal structure of human actions, it is difficult to infer ongoing human actions before they are fully executed. To handle this challenging issue, we first propose two new depth-based features, pairwise relative joint orientations (PRJOs) and depth patch motion maps (DPMMs), to represent the relative movements between each pair of joints and human-object interactions, respectively. The two proposed depth-based features are suitable for recognizing and predicting human actions in real time. Then, we propose a regression-based learning approach with a group-sparsity-inducing regularizer to learn an action predictor based on the combination of PRJOs and DPMMs for a sparse set of joints. Experimental results on benchmark datasets demonstrate that our proposed approach significantly outperforms existing methods for real-time human action recognition and prediction from depth data.

4. Birdi, Jasleen, Audrey Repetti, and Yves Wiaux. "Sparse interferometric Stokes imaging under the polarization constraint (Polarized SARA)." Monthly Notices of the Royal Astronomical Society 478, no. 4 (July 4, 2018): 4442–63. http://dx.doi.org/10.1093/mnras/sty1182.

Abstract:
We develop a novel algorithm for sparse imaging of Stokes parameters in radio interferometry under the polarization constraint. The latter is a physical non-linear relation between the Stokes parameters, imposing the polarization intensity as a lower bound on the total intensity. To solve the joint inverse Stokes imaging problem including this bound, we leverage epigraphical projection techniques in convex optimization and we design a primal-dual method offering a highly flexible and parallelizable structure. In addition, we propose to regularize each Stokes parameter map through an average sparsity prior in the context of a reweighted analysis approach (SARA). The resulting method is dubbed Polarized SARA. Using simulated observations of M87 with the Event Horizon Telescope, we demonstrate that imposing the polarization constraint leads to superior image quality. For the considered data sets, the results also indicate better performance of the average sparsity prior in comparison with the widely used Cotton-Schwab clean algorithm and other total variation based priors for polarimetric imaging. Our MATLAB code is available online on GitHub.

5. Cao, Meng, Wenxing Bao, and Kewen Qu. "Hyperspectral Super-Resolution Via Joint Regularization of Low-Rank Tensor Decomposition." Remote Sensing 13, no. 20 (October 14, 2021): 4116. http://dx.doi.org/10.3390/rs13204116.

Abstract:
The hyperspectral image super-resolution (HSI-SR) problem aims at reconstructing the high-resolution spatial-spectral information of the scene by fusing low-resolution hyperspectral images (LR-HSI) and the corresponding high-resolution multispectral image (HR-MSI). In order to effectively preserve the spatial and spectral structure of hyperspectral images, a new joint regularized low-rank tensor decomposition method (JRLTD) is proposed for HSI-SR. This model alleviates the problem that the traditional HSI-SR method, based on tensor decomposition, fails to adequately take into account the manifold structure of high-dimensional HR-HSI and is sensitive to outliers and noise. The model first operates on the hyperspectral data using the classical Tucker decomposition to transform the hyperspectral data into the form of a three-mode dictionary multiplied by the core tensor, after which graph regularization and unidirectional total variation (TV) regularization are introduced to constrain the three-mode dictionary. In addition, we impose the l1-norm on the core tensor to characterize its sparsity. The model thus effectively preserves the spatial and spectral structures in the fused hyperspectral images while reducing the presence of anomalous noise values. In this paper, the hyperspectral image super-resolution problem is transformed into a joint regularization optimization problem based on tensor decomposition and solved by a hybrid framework combining the alternating direction method of multipliers (ADMM) and the proximal alternate optimization (PAO) algorithm. Experimental results conducted on two benchmark datasets and one real dataset show that JRLTD outperforms state-of-the-art hyperspectral super-resolution algorithms.

6. Dai, Ling-Yun, Rong Zhu, and Juan Wang. "Joint Nonnegative Matrix Factorization Based on Sparse and Graph Laplacian Regularization for Clustering and Co-Differential Expression Genes Analysis." Complexity 2020 (November 16, 2020): 1–10. http://dx.doi.org/10.1155/2020/3917812.

Abstract:
The explosion of multiomics data poses new challenges to existing data mining methods. Joint analysis of multiomics data can make the best of the complementary information provided by different types of data, and can therefore explore the biological mechanisms of diseases more accurately. In this article, two forms of joint nonnegative matrix factorization based on sparse and graph Laplacian regularization (SG-jNMF) are proposed. In this method, the graph regularization constraint preserves the local geometric structure of the data, while L2,1-norm regularization enhances sparsity among the rows and removes redundant features in the data. First, SG-jNMF1 projects multiomics data into a common subspace and applies the multiomics fusion characteristic matrix to mine the important information closely related to diseases. Second, multiomics data of the same disease are mapped into the common sample space by SG-jNMF2, and the cluster structures are detected clearly. Experimental results show that SG-jNMF achieves significant improvement in sample clustering compared with existing joint analysis frameworks. SG-jNMF also effectively integrates multiomics data to identify co-differentially expressed genes (Co-DEGs). SG-jNMF provides an efficient integrative analysis method for mining the biological information hidden in heterogeneous multiomics data.
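For readers unfamiliar with the L2,1-norm regularization mentioned in this abstract, a minimal sketch of the norm and its row-wise proximal (shrinkage) operator follows. The function names are illustrative, not from the cited paper.

```python
import numpy as np

def l21_norm(W):
    # L2,1-norm: sum of the Euclidean norms of the rows. Penalizing it
    # drives entire rows to zero, i.e. removes redundant features jointly.
    return float(np.sum(np.linalg.norm(W, axis=1)))

def prox_l21(W, t):
    # Proximal operator of t * L2,1: shrink each row's norm by t,
    # zeroing rows whose norm falls below t (row-wise soft thresholding).
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale
```

The row-wise zeroing is what makes the penalty a *group* sparsity tool: a feature (row) is kept or discarded as a whole across all columns.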

7. Abdulaziz, Abdullah, Arwa Dabbech, and Yves Wiaux. "Wideband super-resolution imaging in Radio Interferometry via low rankness and joint average sparsity models (HyperSARA)." Monthly Notices of the Royal Astronomical Society 489, no. 1 (August 5, 2019): 1230–48. http://dx.doi.org/10.1093/mnras/stz2117.

Abstract:
We propose a new approach within the versatile framework of convex optimization to solve the radio-interferometric wideband imaging problem. Our approach, dubbed HyperSARA, leverages low-rankness and joint average sparsity priors to enable the formation of high-resolution and high-dynamic-range image cubes from visibility data. The resulting minimization problem is solved using a primal-dual algorithm. The algorithmic structure ships with highly useful functionalities such as preconditioning for accelerated convergence, and parallelization that spreads the computational cost and memory requirements across a multitude of processing nodes with limited resources. In this work, we provide a proof of concept for wideband image reconstruction of megabyte-size images. The better performance of HyperSARA, in terms of resolution and dynamic range of the formed images, compared to single-channel imaging and the clean-based wideband imaging algorithm in the wsclean software, is showcased on simulations and Very Large Array observations. Our MATLAB code is available online on GitHub.

8. Tigges, Timo, Janis Sarikas, Michael Klum, and Reinhold Orglmeister. "Compressed sensing of multi-lead ECG signals by compressive multiplexing." Current Directions in Biomedical Engineering 1, no. 1 (September 1, 2015): 65–68. http://dx.doi.org/10.1515/cdbme-2015-0017.

Abstract:
Compressed sensing has recently been proposed for efficient data compression of multi-lead electrocardiogram recordings within ambulatory patient monitoring applications, e.g. wireless body sensor networks. However, current approaches only focus on signal reconstruction and do not consider the efficient compression of signal ensembles. In this work, we propose the utilization of a compressive multiplexing architecture that facilitates an efficient hardware implementation of compressed sensing for multi-lead ECG signals. For the reconstruction of ECG signal ensembles, we employ a greedy algorithm that exploits their joint sparsity structure. Our simulation study shows promising results, which motivate further research in the field of compressive multiplexing for the acquisition of multi-lead ECG signals.
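The abstract does not specify which greedy algorithm is used; a common choice for such multiple-measurement-vector problems is simultaneous orthogonal matching pursuit (SOMP). A minimal sketch follows, under the assumption that all ECG leads share one sparse support; it is an illustration of the general technique, not the paper's implementation.

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: recover a row-sparse X (shared support across
    channels) from joint measurements Y = A @ X.

    A : (m, n) sensing matrix, Y : (m, L) measurements, k : sparsity level.
    """
    m, n = A.shape
    support = []
    R = Y.copy()                       # residual shared across the L channels
    for _ in range(k):
        # pick the atom most correlated with the residual across ALL channels
        scores = np.linalg.norm(A.T @ R, axis=1)
        scores[support] = -np.inf      # do not re-pick already chosen atoms
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        # joint least-squares refit of all channels on the current support
        Xs, *_ = np.linalg.lstsq(As, Y, rcond=None)
        R = Y - As @ Xs
    X = np.zeros((n, Y.shape[1]))
    X[support] = Xs
    return X, sorted(support)
```

Because the atom-selection score aggregates correlations over every lead, fewer measurements per lead suffice than if each lead were reconstructed independently, which is the practical payoff of the joint sparsity structure.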

9. Ge, Ting, Tianming Zhan, Qinfeng Li, and Shanxiang Mu. "Optimal Superpixel Kernel-Based Kernel Low-Rank and Sparsity Representation for Brain Tumour Segmentation." Computational Intelligence and Neuroscience 2022 (June 24, 2022): 1–12. http://dx.doi.org/10.1155/2022/3514988.

Abstract:
Given the need for quantitative measurement and 3D visualisation of brain tumours, more and more attention has been paid to the automatic segmentation of tumour regions from brain tumour magnetic resonance (MR) images. In view of the uneven grey distribution of MR images and the fuzzy boundaries of brain tumours, a representation model based on the joint constraints of kernel low-rank and sparsity (KLRR-SR) is proposed to mine the characteristics and structural prior knowledge of brain tumour images in the spectral kernel space. In addition, an optimal kernel based on superpixel uniform regions and multikernel learning (MKL) is constructed to improve the accuracy of the pairwise similarity measurement of pixels in the kernel space. By introducing the optimal kernel into KLRR-SR, the coefficient matrix can be solved, which allows brain tumour segmentation results to conform with the spatial information of the image. The experimental results demonstrate that the segmentation accuracy of the proposed method is superior to several existing methods under different indicators, and that the sparsity constraint for the coefficient matrix in the kernel space, which is integrated into the kernel low-rank model, is effective in preserving the local structure and details of brain tumours.

10. Ge, Ting, Ning Mu, Tianming Zhan, Zhi Chen, Wanrong Gao, and Shanxiang Mu. "Brain Lesion Segmentation Based on Joint Constraints of Low-Rank Representation and Sparse Representation." Computational Intelligence and Neuroscience 2019 (July 1, 2019): 1–11. http://dx.doi.org/10.1155/2019/9378014.

Abstract:
The segmentation of brain lesions from a brain magnetic resonance (MR) image is of great significance for clinical diagnosis and follow-up treatment. An automatic segmentation method for brain lesions is proposed based on low-rank representation (LRR) and sparse representation (SR) theory. The proposed method decomposes the brain image into a background part composed of brain tissue and a brain lesion part. Considering that each pixel in the brain tissue can be represented by the background dictionary, a low-rank representation that incorporates a sparsity-inducing regularization term is adopted to model this part. The linearized alternating direction method with adaptive penalty (LADMAP) is then selected to solve the model, and the brain lesions can be obtained from the response of the residual matrix. The presented model not only reflects the global structure of the image but also preserves the local information of the pixels, thus improving the representation accuracy. The experimental results on data from brain tumor patients and multiple sclerosis patients revealed that the proposed method is superior to several existing methods in terms of segmentation accuracy while realizing the segmentation automatically.

11. Alabideen, Lama Zien, Oumayma Al-Dakkak, and Khaldoun Khorzom. "Reweighted Covariance Fitting Based on Nonconvex Schatten-p Minimization for Gridless Direction of Arrival Estimation." Mathematical Problems in Engineering 2020 (April 27, 2020): 1–11. http://dx.doi.org/10.1155/2020/3012952.

Abstract:
In this paper, we reformulate the gridless direction of arrival (DoA) estimation problem in a novel reweighted covariance fitting (CF) method. The proposed method promotes joint sparsity among different snapshots by means of a nonconvex Schatten-p quasi-norm penalty. Furthermore, to obtain a more tractable and scalable optimization problem, we apply the unified surrogate for the Schatten-p quasi-norm with two-factor matrix norms. Then, a locally convergent iterative reweighted minimization method is derived and solved efficiently via a semidefinite program using the optimization toolbox. Finally, numerical simulations are carried out in the presence of unknown nonuniform noise and under the consideration of a coprime array (CPA) structure. The results illustrate the superiority of the proposed method in terms of resolution, robustness against nonuniform noise, and correlations of sources, in addition to its applicability with a limited number of snapshots.
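As a small illustration of the Schatten-p quasi-norm this abstract relies on (illustrative code, not from the cited paper): for p < 1 it penalizes singular values more aggressively than the nuclear norm (p = 1), which is why it promotes low rank.

```python
import numpy as np

def schatten_p(M, p):
    # Schatten-p quasi-norm raised to the p-th power: sum_i sigma_i(M)^p.
    # p = 1 recovers the nuclear norm; p -> 0 approaches the rank of M.
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(s ** p))
```

Comparing two matrices with equal nuclear norm, the one with concentrated (low-rank) spectrum gets the smaller Schatten-p value for p < 1, which is the behavior the nonconvex penalty exploits.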

12. Miao, Gongxun, Guohua Wu, Zhen Zhang, Yongjie Tong, and Bing Lu. "AddAG-AE: Anomaly Detection in Dynamic Attributed Graph Based on Graph Attention Network and LSTM Autoencoder." Electronics 12, no. 13 (June 21, 2023): 2763. http://dx.doi.org/10.3390/electronics12132763.

Abstract:
Recently, anomaly detection in dynamic networks has received increased attention due to massive network-structured data arising in many fields, such as network security, intelligent transportation systems, and computational biology. However, many existing methods in this area fail to fully leverage all available information from dynamic networks. Additionally, most of these methods are supervised or semi-supervised algorithms that require labeled data, which may not always be feasible in real-world scenarios. In this paper, we propose AddAG-AE, a general dynamic graph anomaly-detection framework that can fuse node attributes and spatiotemporal information to detect anomalies in an unsupervised manner. The framework consists of two main components. The first component is a feature extractor composed of a dual autoencoder, which captures a joint representation of both the network structure and node attributes in a latent space. The second component is an anomaly detector that combines a Long Short-Term Memory AutoEncoder (LSTM-AE) and a predictor, effectively identifying abnormal snapshots among most normal graph snapshots. Compared with baselines, experimental results show that the method proposed has broad applicability and higher robustness on three datasets with different sparsity.

13. Miriya Thanthrige, Udaya S. K. P., Peter Jung, and Aydin Sezgin. "Deep Unfolding of Iteratively Reweighted ADMM for Wireless RF Sensing." Sensors 22, no. 8 (April 15, 2022): 3065. http://dx.doi.org/10.3390/s22083065.

Abstract:
We address the detection of material defects inside a layered material structure using compressive sensing-based multiple-input and multiple-output (MIMO) wireless radar. Here, strong clutter due to the reflection off the layered structure's surface often makes the detection of the defects challenging. Thus, sophisticated signal separation methods are required for improved defect detection. In many scenarios, the number of defects of interest is limited, and the signaling response of the layered structure can be modeled as a low-rank structure. Therefore, we propose joint rank and sparsity minimization for defect detection. In particular, we propose a non-convex approach based on the iteratively reweighted nuclear and ℓ1-norm (a double-reweighted approach) to obtain a higher accuracy compared to conventional nuclear norm and ℓ1-norm minimization. To this end, an iterative algorithm is designed to estimate the low-rank and sparse contributions. Further, we propose deep learning-based parameter tuning of the algorithm (i.e., algorithm unfolding) to improve the accuracy and the speed of convergence of the algorithm. Our numerical results show that the proposed approach outperforms conventional approaches in terms of the mean squared errors of the recovered low-rank and sparse components and the speed of convergence.
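The iterative reweighting idea for the ℓ1 part can be shown on a toy denoising problem. This is a sketch under the commonly assumed weight update w_i = 1/(|x_i| + ε), not the paper's unfolded ADMM; the function names and default parameters are illustrative.

```python
import numpy as np

def soft(x, t):
    # element-wise soft thresholding, the prox of a (weighted) l1 term
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1_denoise(y, lam=0.5, eps=0.1, iters=5):
    """Iteratively reweighted l1 denoising:
    min_x 0.5||x - y||^2 + lam * sum_i w_i |x_i|,
    with weights w_i = 1/(|x_i| + eps) refreshed each outer iteration,
    so small entries are penalized harder and pushed to exact zero."""
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)
        x = soft(y, lam * w)    # closed-form solution of the weighted subproblem
    return x
```

The same reweighting pattern applied to singular values instead of entries gives the iteratively reweighted nuclear norm, i.e. the "double-reweighted" combination the abstract describes.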

14. Ma, Fei, Feixia Yang, Ziliang Ping, and Wenqin Wang. "Joint Spatial-Spectral Smoothing in a Minimum-Volume Simplex for Hyperspectral Image Super-Resolution." Applied Sciences 10, no. 1 (December 27, 2019): 237. http://dx.doi.org/10.3390/app10010237.

Abstract:
The limitations of hyperspectral sensors usually lead to coarse spatial resolution of acquired images. A well-known fusion method called coupled non-negative matrix factorization (CNMF) often amounts to an ill-posed inverse problem with poor anti-noise performance. Moreover, from the perspective of matrix decomposition, the matrixing of remotely sensed cubic data results in the loss of the data's structural information, which causes performance degradation in the reconstructed images. In addition to three-dimensional tensor-based fusion methods, Craig's minimum-volume belief in hyperspectral unmixing can also be utilized to restore the data structure information for hyperspectral image super-resolution. To address the above difficulties simultaneously, this article incorporates the regularization of joint spatial-spectral smoothing in a minimum-volume simplex, together with spatial sparsity, into the original CNMF to redefine a bi-convex problem. After convexification of the regularizers, alternating optimization is utilized to decouple the regularized problem into two convex subproblems, which are then reformulated by separately vectorizing the variables via vector-matrix operators. The alternating direction method of multipliers is employed to split the variables and yield closed-form solutions. In addition, in order to overcome the bottleneck of high computational burden, especially when the size of the problem is large, complexity reduction is conducted to simplify the solutions with constructed matrices and tensor operators. Experimental results illustrate that the proposed algorithm outperforms state-of-the-art fusion methods, which verifies the validity of the new fusion approach in this article.

15. Xie, Shicheng, Shun Wang, Chuanming Song, and Xianghai Wang. "Hyperspectral Image Reconstruction Based on Spatial-Spectral Domains Low-Rank Sparse Representation." Remote Sensing 14, no. 17 (August 25, 2022): 4184. http://dx.doi.org/10.3390/rs14174184.

Abstract:
The enormous amount of data that are generated by hyperspectral remote sensing images (HSI) combined with the spatial channel’s limited and fragile bandwidth creates serious transmission, storage, and application challenges. HSI reconstruction based on compressed sensing has become a frontier area, and its effectiveness depends heavily on the exploitation and sparse representation of HSI information correlation. In this paper, we propose a low-rank sparse constrained HSI reconstruction model (LRCoSM) that is based on joint spatial-spectral HSI sparseness. In the spectral dimension, a spectral domain sparsity measure and the representation of the joint spectral dimensional plane are proposed for the first time. A Gaussian mixture model (GMM) that is based on unsupervised adaptive parameter learning of external datasets is used to cluster similar patches of joint spectral plane features, capturing the correlation of HSI spectral dimensional non-local structure image patches while performing low-rank decomposition of clustered similar patches to extract feature information, effectively improving the ability of low-rank approximate sparse representation of spectral dimensional similar patches. In the spatial dimension, local-nonlocal HSI similarity is explored to refine sparse prior constraints. Spectral and spatial dimension sparse constraints improve HSI reconstruction quality. Experimental results that are based on various sampling rates on four publicly available datasets show that the proposed algorithm can obtain high-quality reconstructed PSNR and FSIM values and effectively maintain the spectral curves for few-band datasets compared with six currently popular reconstruction algorithms, and the proposed algorithm has strong robustness and generalization ability at different sampling rates and on other datasets.

16. Gao, Xuyang, Yibing Shi, Kai Du, Qi Zhu, and Wei Zhang. "Sparse Blind Deconvolution with Nonconvex Optimization for Ultrasonic NDT Application." Sensors 20, no. 23 (December 4, 2020): 6946. http://dx.doi.org/10.3390/s20236946.

Abstract:
In the field of ultrasonic nondestructive testing (NDT), robust and accurate detection of defects is a challenging task because of the attenuation and noise corruption of the ultrasonic wave in the structure. For determining the reflection characteristics representing the position and amplitude of ultrasonic detection signals, sparse blind deconvolution methods have been implemented to separate overlapping echoes when the ultrasonic transducer impulse response is unknown. This letter introduces the ℓ1/ℓ2 ratio regularization function to model the deconvolution as a nonconvex optimization problem. The initialization influences the accuracy of estimation and, for this purpose, the alternating direction method of multipliers (ADMM) combined with blind gain calibration is used to find an initial approximation to the real solution, given multiple observations in a joint sparsity setting. The proximal alternating linearized minimization (PALM) algorithm is embedded in the iterative solution, in which the majorize-minimize (MM) approach accelerates convergence. Compared with conventional blind deconvolution algorithms, the proposed methods demonstrate robustness and the capability of separating overlapping echoes in synthetic experiments.
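The ℓ1/ℓ2 ratio regularizer mentioned above is a scale-invariant sparsity measure; a minimal sketch (illustrative only, not the cited deconvolution code):

```python
import numpy as np

def l1_over_l2(x):
    # Scale-invariant sparsity measure: equals 1 for a 1-sparse vector and
    # sqrt(n) for a maximally spread n-vector; minimizing it promotes sparsity
    # without shrinking the overall signal energy the way plain l1 does.
    x = np.asarray(x, dtype=float)
    return float(np.linalg.norm(x, 1) / np.linalg.norm(x, 2))
```

Scale invariance is the key property for blind deconvolution, where the unknown gain of the impulse response would otherwise be absorbed by an ℓ1-only penalty.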

17. Wang, Tiebin, Kaiyuan Huang, Min Liu, and Ranran He. "Sparse Space Shift Keying Modulation with Enhanced Constellation Mapping." Sensors 22, no. 15 (August 7, 2022): 5895. http://dx.doi.org/10.3390/s22155895.

Abstract:
To reduce the switching frequency between the radio frequency (RF) chain and transmit antennas, a class of new sparse space shift keying modulation (SSSK) schemes is presented. This new class is proposed to simplify hardware implementation by carefully designing the spatial constellation mapping pattern. Specifically, different from traditional space shift keying (SSK) modulation, the proposed SSSK scheme utilizes more time slots to construct a joint design of time- and spatial-domain SSK modulation, while maintaining the special structure of a single RF chain. Since part of the multi-dimensional constellations of SSSK concentrate the energy in fewer time slots, the RF-switching frequency is effectively reduced due to the sparsity introduced in the time domain. Furthermore, through theoretical analysis, we obtain a closed-form expression for the bit error probability of the SSSK scheme, and demonstrate that a slight performance gain can be achieved compared to traditional SSK with reduced implementation cost. Moreover, we integrate transmit antenna selection (TAS) to achieve a considerable performance gain. Finally, simulation results confirm the effectiveness of the proposed SSSK scheme compared to its traditional counterpart.

18. Dong, Naghedolfeizi, Aberra, and Zeng. "Spectral–Spatial Discriminant Feature Learning for Hyperspectral Image Classification." Remote Sensing 11, no. 13 (June 29, 2019): 1552. http://dx.doi.org/10.3390/rs11131552.

Abstract:
Sparse representation classification (SRC) is widely applied to target detection in hyperspectral images (HSI). However, because high-dimensional HSI data contain redundant information, SRC methods may fail to achieve high classification performance, even with a large number of spectral bands. Selecting a subset of predictive features in a high-dimensional space is an important and challenging problem for hyperspectral image classification. In this paper, we propose a novel discriminant feature learning (DFL) method, which combines spectral and spatial information into a hypergraph Laplacian. First, a subset of discriminative features is selected, which preserves the spectral structure of the data and the inter- and intra-class constraints on labeled training samples. A feature evaluator is obtained by semi-supervised learning with the hypergraph Laplacian. Second, the selected features are mapped into a lower-dimensional eigenspace through a generalized eigendecomposition of the Laplacian matrix. The finally extracted discriminative features are used in a joint sparsity-model algorithm. Experiments conducted with benchmark data sets and different experimental settings show that our proposed method increases classification accuracy and outperforms state-of-the-art HSI classification methods.

19. Emmert-Streib, Frank, and Matthias Dehmer. "Large-Scale Simultaneous Inference with Hypothesis Testing: Multiple Testing Procedures in Practice." Machine Learning and Knowledge Extraction 1, no. 2 (May 15, 2019): 653–83. http://dx.doi.org/10.3390/make1020039.

Abstract:
A statistical hypothesis test is one of the most eminent methods in statistics. Its pivotal role comes from the wide range of practical problems it can be applied to and its minimal data requirements. Being an unsupervised method makes it very flexible in adapting to real-world situations. The availability of high-dimensional data makes it necessary to apply such statistical hypothesis tests simultaneously to the test statistics of the underlying covariates. However, if applied without correction, this leads to an inevitable inflation of Type 1 errors. To counteract this effect, multiple testing procedures have been introduced to control various types of errors, most notably the Type 1 error. In this paper, we review modern multiple testing procedures for controlling either the family-wise error rate (FWER) or the false-discovery rate (FDR). We emphasize their principal approaches, categorizing them as (1) single-step vs. stepwise approaches, (2) adaptive vs. non-adaptive approaches, and (3) marginal vs. joint multiple testing procedures. We place a particular focus on procedures that can deal with data having a (strong) correlation structure, because real-world data are rarely uncorrelated. Furthermore, we provide background information that makes the often technically intricate methods accessible to interdisciplinary data scientists.
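Among the stepwise FDR-controlling procedures this review covers, the Benjamini-Hochberg step-up rule is the canonical example; a minimal sketch (illustrative, not code from the cited review):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under BH FDR control."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    sorted_p = p[order]
    # step-up rule: find the largest k with p_(k) <= alpha * k / m
    thresh = alpha * np.arange(1, m + 1) / m
    below = sorted_p <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # index of largest passing p-value
        reject[order[:k + 1]] = True       # reject it and everything smaller
    return reject
```

Note that this is a marginal procedure in the review's taxonomy; it controls the FDR exactly under independence (and under positive regression dependence), which is why the review's emphasis on correlated data matters.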
20

Oghenekohwo, Felix, Haneet Wason, Ernie Esser, and Felix J. Herrmann. "Low-cost time-lapse seismic with distributed compressive sensing — Part 1: Exploiting common information among the vintages." GEOPHYSICS 82, no. 3 (May 1, 2017): P1—P13. http://dx.doi.org/10.1190/geo2016-0076.1.

Full text
Abstract:
Time-lapse seismic is a powerful technology for monitoring a variety of subsurface changes due to reservoir fluid flow. However, the practice can be technically challenging when one seeks to acquire colocated time-lapse surveys with high degrees of replicability among the shot locations. We have determined that under “ideal” circumstances, in which we ignore errors related to taking measurements off the grid, high-quality prestack data can be obtained from randomized subsampled measurements observed from surveys in which we choose not to revisit the same randomly subsampled on-the-grid shot locations. Our acquisition is low cost because our measurements are subsampled. We have found that the recovered finely sampled prestack baseline and monitor data actually improve significantly when the same on-the-grid shot locations are not revisited. We achieve this result by using the fact that different time-lapse data share information and that nonreplicated (on-the-grid) acquisitions can add information when prestack data are recovered jointly. Whenever the time-lapse data exhibit joint structure, i.e., they are compressible in some transform domain and share information, sparsity-promoting recovery of the “common component” and “innovations” with respect to this common component outperforms independent recovery of the prestack baseline and monitor data. The recovered time-lapse data are of high enough quality to serve as input for extracting the poststack attributes used to compute time-lapse differences. Without joint recovery, artifacts due to the randomized subsampling lead to deterioration of the repeatability of the time-lapse data. We tested this method by carrying out thousands of repeated experiments to obtain reliable statistics. We also confirmed that high degrees of repeatability are achievable for an ocean-bottom cable survey acquired with time-jittered continuous recording.
APA, Harvard, Vancouver, ISO, and other styles
21

Li, Guojing, Wei Ye, Guochao Lao, Shuya Kong, and Di Yan. "Narrowband Interference Separation for Synthetic Aperture Radar via Sensing Matrix Optimization-Based Block Sparse Bayesian Learning." Electronics 8, no. 4 (April 25, 2019): 458. http://dx.doi.org/10.3390/electronics8040458.

Full text
Abstract:
High-resolution synthetic aperture radar (SAR) operating with a large bandwidth is subject to impacts from various kinds of narrowband interference (NBI) in complex electromagnetic environments. Recently, many radio frequency interference (RFI) suppression approaches for SAR based on sparse recovery have been proposed and demonstrated to outperform traditional ones in preserving the signal of interest (SOI) while suppressing the interference by exploiting their intrinsic structures. In particular, the joint recovery strategy of the SOI and NBI with a cascaded dictionary, which eliminates the steps of NBI reconstruction and time-domain cancellation, can further reduce unnecessary system complexity. However, these sparsity-based approaches hardly work effectively for signals from an extended target or for NBI with a certain bandwidth, since neither is sparse in a prescient domain. Moreover, sub-dictionaries corresponding to different components in the cascaded matrix are not strictly independent, which severely limits the performance of separated reconstruction. In this paper, we present an enhanced NBI separation algorithm for SAR via sensing matrix optimization-based block sparse Bayesian learning (SMO-BSBL) to solve the problems above. First, we extend the block sparse Bayesian learning framework to the complex-valued domain, for the convenience of radar signal processing with lower computational complexity, and modify it to deal with the separation of NBI from the contaminated echo. To improve the separated reconstruction performance, we propose a new block coherence measure, defined via the external and internal block structure, which is used to optimize the observation matrix. The optimized observation matrix is then employed to reconstruct the SOI and NBI simultaneously under the modified BSBL framework, given a known and fixed cascaded dictionary. Numerical simulation experiments and comparison results demonstrate that the proposed SMO-BSBL is effective and superior to other advanced algorithms in NBI suppression for SAR.
APA, Harvard, Vancouver, ISO, and other styles
22

Zhang, Zheng, Xiaofeng Zhu, Guangming Lu, and Yudong Zhang. "Probability Ordinal-Preserving Semantic Hashing for Large-Scale Image Retrieval." ACM Transactions on Knowledge Discovery from Data 15, no. 3 (April 12, 2021): 1–22. http://dx.doi.org/10.1145/3442204.

Full text
Abstract:
Semantic hashing enables computation and memory-efficient image retrieval through learning similarity-preserving binary representations. Most existing hashing methods mainly focus on preserving the piecewise class information or pairwise correlations of samples into the learned binary codes while failing to capture the mutual triplet-level ordinal structure in similarity preservation. In this article, we propose a novel Probability Ordinal-preserving Semantic Hashing (POSH) framework, which for the first time defines the ordinal-preserving hashing concept under a non-parametric Bayesian theory. Specifically, we derive the whole learning framework of the ordinal similarity-preserving hashing based on the maximum posteriori estimation, where the probabilistic ordinal similarity preservation, probabilistic quantization function, and probabilistic semantic-preserving function are jointly considered into one unified learning framework. In particular, the proposed triplet-ordering correlation preservation scheme can effectively improve the interpretation of the learned hash codes under an economical anchor-induced asymmetric graph learning model. Moreover, the sparsity-guided selective quantization function is designed to minimize the loss of space transformation, and the regressive semantic function is explored to promote the flexibility of the formulated semantics in hash code learning. The final joint learning objective is formulated to concurrently preserve the ordinal locality of original data and explore potentials of semantics for producing discriminative hash codes. Importantly, an efficient alternating optimization algorithm with the strictly proof convergence guarantee is developed to solve the resulting objective problem. Extensive experiments on several large-scale datasets validate the superiority of the proposed method against state-of-the-art hashing-based retrieval methods.
APA, Harvard, Vancouver, ISO, and other styles
23

Bai, Zijian, Yinfeng Huang, Suzhi Zhang, Pu Li, Yuanyuan Chang, and Xiang Lin. "Multi-Level Knowledge-Aware Contrastive Learning Network for Personalized Recipe Recommendation." Applied Sciences 12, no. 24 (December 14, 2022): 12863. http://dx.doi.org/10.3390/app122412863.

Full text
Abstract:
Personalized recipe recommendation is attracting more and more attention, as it can help people make choices amid the explosive growth of online food information. Unlike other recommendation tasks, the target of recipe recommendation is a non-atomic item, so attribute information is especially important for the representation of recipes. However, traditional collaborative filtering or content-based recipe recommendation methods tend to focus on user–recipe interaction information and ignore higher-order semantic and structural information. Recently, recommendation methods based on graph neural networks (GNNs) provided new ideas for recipe recommendation, but they suffer from sparse supervised signals caused by the long-tailed distribution of heterogeneous graph entities. How to construct high-quality representations of users and recipes thus becomes a new challenge for personalized recipe recommendation. In this paper, we propose a new method, a multi-level knowledge-aware contrastive learning network (MKCLN), for personalized recipe recommendation. Compared with traditional contrastive learning, we design multi-level views to satisfy the requirement of fine-grained representation of users and recipes, and use multiple knowledge-aware aggregation methods for node fusion to finally make recommendations. Specifically, the local level includes two views, an interaction view and a semantic view, which mine collaborative and semantic information for high-quality representation of nodes. The global level learns node embeddings by capturing higher-order structural and semantic information through a network structure view. Then, a self-supervised cross-view contrastive learning scheme is invoked so that the information in the multiple views collaboratively supervises each view to learn fine-grained node embeddings. Finally, recipes that satisfy personalized preferences are recommended to users through joint training and the model's prediction function. In this study, we conduct experiments on two real recipe datasets, and the experimental results demonstrate the effectiveness and advancement of MKCLN.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Haichao, Yanning Zhang, N. M. Nasrabadi, and T. S. Huang. "Joint-Structured-Sparsity-Based Classification for Multiple-Measurement Transient Acoustic Signals." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42, no. 6 (December 2012): 1586–98. http://dx.doi.org/10.1109/tsmcb.2012.2196038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kjeldal, Inger. "Irreducible Compound Dorsal Dislocations of the Proximal Interphalangeal Joint of the Finger." Journal of Hand Surgery 11, no. 1 (February 1986): 49–50. http://dx.doi.org/10.1016/0266-7681_86_90011-2.

Full text
Abstract:
Three cases of compound irreducible dorsal dislocation of the proximal interphalangeal joint of the finger are reported, and the probable mechanism is discussed. The findings warrant the description “volar capsular boutonnière”, as the condyles of the proximal phalanx buttonhole through the volar structures. Open reduction combined with debridement is the treatment for such compound irreducible dorsal dislocations. Dislocations of the proximal interphalangeal joints of the fingers are common and can usually be reduced by simple traction. Occasionally, reduction by closed methods is unsuccessful because of interposition of volar or dorsal soft tissue structures (Lamb 1981). This study reports three cases of compound dorsal dislocation of the proximal interphalangeal joint with volar soft tissue interposition. Such lesions are sparsely mentioned in textbooks on fractures and hand injuries, and hitherto only a few cases have been published (Lamb 1981, Bunnell 1956).
APA, Harvard, Vancouver, ISO, and other styles
26

Zhang, Chi, Shuanghui Zhang, Yongxiang Liu, and Xiang Li. "Joint Structured Sparsity and Least Entropy Constrained Sparse Aperture Radar Imaging and Autofocusing." IEEE Transactions on Geoscience and Remote Sensing 58, no. 9 (September 2020): 6580–93. http://dx.doi.org/10.1109/tgrs.2020.2978096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Xie, Liping, Dacheng Tao, and Haikun Wei. "Joint Structured Sparsity Regularized Multiview Dimension Reduction for Video-Based Facial Expression Recognition." ACM Transactions on Intelligent Systems and Technology 8, no. 2 (January 18, 2017): 1–21. http://dx.doi.org/10.1145/2956556.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Djelouat, Hamza, Hamza Baali, Abbes Amira, and Faycal Bensaali. "An Adaptive Joint Sparsity Recovery for Compressive Sensing Based EEG System." Wireless Communications and Mobile Computing 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/9823684.

Full text
Abstract:
The last decade has witnessed tremendous efforts to shape Internet of things (IoT) platforms to be well suited for healthcare applications. These platforms comprise a network of wireless sensors that monitor several physical and physiological quantities. For instance, long-term monitoring of brain activity using wearable electroencephalogram (EEG) sensors is widely exploited in the clinical diagnosis of epileptic seizures and sleeping disorders. However, the deployment of such platforms is challenged by high power consumption and system complexity. Energy efficiency can be achieved by exploring efficient compression techniques such as compressive sensing (CS). CS is an emerging theory that enables compressed acquisition using well-designed sensing matrices. Moreover, system complexity can be optimized by using hardware-friendly structured sensing matrices. This paper quantifies the performance of CS-based multichannel EEG monitoring. In addition, the paper exploits the joint sparsity of multichannel EEG using the subspace pursuit (SP) algorithm, as well as a designed sparsifying basis, in order to improve the reconstruction quality. Furthermore, the paper proposes a modification to the SP algorithm based on an adaptive selection approach to further improve the performance in terms of reconstruction quality, execution time, and robustness of the recovery process.
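The subspace pursuit algorithm that this paper modifies can be sketched in its basic single-channel form; the joint-sparsity and adaptive-selection extensions described in the abstract are not reproduced here (a minimal sketch, not the authors' code):

```python
import numpy as np

def subspace_pursuit(A, y, k, max_iter=20):
    """Subspace pursuit (SP): greedily recovers a k-sparse x from y = A @ x
    by expanding the support with k new candidates, solving least squares,
    and pruning back to the k largest coefficients each iteration."""
    n = A.shape[1]

    def ls_fit(support):
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        return x

    support = np.argsort(-np.abs(A.T @ y))[:k]       # initial support estimate
    x = ls_fit(support)
    res_norm = np.linalg.norm(y - A @ x)
    for _ in range(max_iter):
        r = y - A @ x
        cand = np.union1d(support, np.argsort(-np.abs(A.T @ r))[:k])
        xc = ls_fit(cand)                            # fit on expanded support
        support = cand[np.argsort(-np.abs(xc[cand]))[:k]]  # prune to k largest
        x_new = ls_fit(support)
        new_res = np.linalg.norm(y - A @ x_new)
        if new_res >= res_norm:                      # stop when residual stalls
            break
        x, res_norm = x_new, new_res
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)                       # unit-norm columns
x0 = np.zeros(80)
x0[[3, 17, 42]] = [1.5, -2.0, 0.75]                  # a 3-sparse ground truth
x_hat = subspace_pursuit(A, A @ x0, 3)
```

The backtracking (expand-then-prune) step is what distinguishes SP from plain orthogonal matching pursuit, which can never discard a wrongly chosen atom.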
APA, Harvard, Vancouver, ISO, and other styles
29

Wu, Xing, Xia Zhang, Nan Wang, and Yi Cen. "Joint Sparse and Low-Rank Multi-Task Learning with Extended Multi-Attribute Profile for Hyperspectral Target Detection." Remote Sensing 11, no. 2 (January 15, 2019): 150. http://dx.doi.org/10.3390/rs11020150.

Full text
Abstract:
Target detection is an active area in hyperspectral imagery (HSI) processing, and many algorithms have been proposed over the past decades. However, conventional detectors mainly benefit from the spectral information without fully exploiting the spatial structures of HSI. Besides, they primarily use information from all bands and ignore the inter-band redundancy. Moreover, they do not make full use of the difference between the background and target samples. To alleviate these problems, we propose a novel joint sparse and low-rank multi-task learning (MTL) with extended multi-attribute profile (EMAP) algorithm (MTJSLR-EMAP). Briefly, the spatial features of HSI are first extracted by morphological attribute filters. Then MTL is exploited to reduce band redundancy and retain the discriminative information simultaneously. Considering the distribution difference between the background and target samples, the target and background pixels are separately modeled with different regularization terms. In each task, a background pixel can be low-rank represented by the background samples, while a target pixel can be sparsely represented by the target samples. Finally, the proposed algorithm was compared with six detectors on three datasets: constrained energy minimization (CEM), adaptive coherence estimator (ACE), hierarchical CEM (hCEM), a sparsity-based detector (STD), a joint sparse representation and MTL detector (JSR-MTL), and independent encoding JSR-MTL (IEJSR-MTL). Compared with each competitor in turn, it achieves an average detection performance improvement of about 19.94%, 22.53%, 16.92%, 14.87%, 14.73%, and 4.21%, respectively. Extensive experimental results demonstrate that MTJSLR-EMAP outperforms several state-of-the-art algorithms.
APA, Harvard, Vancouver, ISO, and other styles
30

Chang, Yan-Shuo, Feiping Nie, and Ming-Yu Wang. "Multiview Feature Analysis via Structured Sparsity and Shared Subspace Discovery." Neural Computation 29, no. 7 (July 2017): 1986–2003. http://dx.doi.org/10.1162/neco_a_00977.

Full text
Abstract:
Since combining features from heterogeneous data sources can significantly boost classification performance in many applications, it has attracted much research attention over the past few years. Most existing multiview feature analysis approaches learn features in each view separately, ignoring knowledge shared by multiple views. Different views of features may have intrinsic correlations that are beneficial to feature learning. Therefore, it is assumed that the multiple views share subspaces from which common knowledge can be discovered. In this letter, we propose a new multiview feature learning algorithm aiming to exploit common features shared by different views. To achieve this goal, we propose a feature learning algorithm in a batch mode, by which the correlations among different views are taken into account. Multiple transformation matrices for different views are simultaneously learned in a joint framework. In this way, our algorithm can exploit potential correlations among views as supplementary information that further improves the performance. Since the proposed objective function is nonsmooth and difficult to solve directly, we propose an iterative algorithm for effective optimization. Extensive experiments have been conducted on a number of real-world data sets. Experimental results demonstrate superior classification performance against all the compared approaches, and the convergence guarantee has been validated in the experiments.
APA, Harvard, Vancouver, ISO, and other styles
31

Fan, Bo, Xiaoli Zhou, Shuo Chen, Zhijie Jiang, and Yongqiang Cheng. "Sparse Bayesian Perspective for Radar Coincidence Imaging with Model Errors." Mathematical Problems in Engineering 2020 (April 21, 2020): 1–12. http://dx.doi.org/10.1155/2020/9202654.

Full text
Abstract:
Sparsity-driven methods are commonly applied to reconstruct targets in radar coincidence imaging (RCI), where the reference matrix needs to be computed precisely and the prior knowledge of the accurate imaging model is essential. Unfortunately, the existence of model errors in practical RCI applications is common, which defocuses the reconstructed image considerably. Accordingly, this paper aims to formulate a unified framework for sparsity-driven RCI with model errors based on the sparse Bayesian approach. Firstly, a parametric joint sparse reconstruction model is built to describe the RCI when perturbed by model errors. The structured sparse Bayesian prior is then assigned to this model, after which the structured sparse Bayesian autofocus (SSBA) algorithm is proposed in the variational Bayesian expectation maximization (VBEM) framework; this solution jointly realizes sparse imaging and model error calibration. Simulation results demonstrate that the proposed algorithm can both calibrate the model errors and obtain a well-focused target image with high reconstruction accuracy.
APA, Harvard, Vancouver, ISO, and other styles
32

Xu, Lei, Chuancheng Song, and Lei Chen. "Tri-Structured-Sparsity Induced Joint Feature Selection and Classification for Hybrid Noise Resilient Multilabel Learning." IEEE Access 8 (2020): 108270–80. http://dx.doi.org/10.1109/access.2020.3001274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Srinivas, Umamahesh, Yi Chen, Vishal Monga, Nasser Nasrabadi, and Trac Tran. "Exploiting Sparsity in Hyperspectral Image Classification via Graphical Models." Geoscience and Remote Sensing Letters, IEEE 10, no. 3 (November 2012): 505–9. http://dx.doi.org/10.1109/lgrs.2012.2211858.

Full text
Abstract:
A significant recent advance in hyperspectral image (HSI) classification relies on the observation that the spectral signature of a pixel can be represented by a sparse linear combination of training spectra from an overcomplete dictionary. A spatiospectral notion of sparsity is further captured by developing a joint sparsity model, wherein the spectral signatures of pixels in a local spatial neighborhood (of the pixel of interest) are constrained to be represented by a common collection of training spectra, albeit with different weights. A challenging open problem is to effectively capture the class-conditional correlations between these multiple sparse representations corresponding to different pixels in the spatial neighborhood. We propose a probabilistic graphical model framework to explicitly mine the conditional dependences between these distinct sparse features. Our graphical models are synthesized using simple tree structures that can be discriminatively learned (even with limited training samples) for classification. Experiments on benchmark HSI data sets reveal significant improvements over existing approaches in classification rates as well as robustness to the choice of training samples.
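The joint sparsity model described here, where pixels in a neighborhood share one set of dictionary atoms but keep their own weights, is commonly solved with simultaneous orthogonal matching pursuit. A minimal sketch of that standard solver (not the graphical-model method of the paper):

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP: finds a common support of size k shared by all
    columns of Y (e.g., pixels in a spatial neighborhood), each column
    getting its own coefficients. D is an (m, n) dictionary with unit-norm
    columns; Y is an (m, t) stack of t measurement vectors."""
    residual = Y.copy()
    support = []
    X = None
    for _ in range(k):
        # Pick the atom with the largest total correlation across signals
        corr = np.linalg.norm(D.T @ residual, axis=1)
        corr[support] = -np.inf                     # never reselect an atom
        support.append(int(np.argmax(corr)))
        Ds = D[:, support]
        X, *_ = np.linalg.lstsq(Ds, Y, rcond=None)  # joint least-squares fit
        residual = Y - Ds @ X
    return support, X

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0)
X0 = np.zeros((30, 4))                              # 4 signals, shared support
X0[2], X0[5] = 1.0, -2.0
support, X = somp(D, D @ X0, 2)
```

The shared support enforces the spatiospectral constraint; per-column coefficients preserve each pixel's individual weights.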
APA, Harvard, Vancouver, ISO, and other styles
34

Fannjiang, Albert. "TV-min and greedy pursuit for constrained joint sparsity and application to inverse scattering." Mathematics and Mechanics of Complex Systems 1, no. 1 (February 6, 2013): 81–104. http://dx.doi.org/10.2140/memocs.2013.1.81.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Hu, Yaping, Liying Liu, and Yujie Wang. "Alternating Direction Multiplier Method for Matrix l2,1-Norm Optimization in Multitask Feature Learning Problems." Mathematical Problems in Engineering 2020 (August 26, 2020): 1–7. http://dx.doi.org/10.1155/2020/4864296.

Full text
Abstract:
The joint feature selection problem can be resolved by solving a matrix l2,1-norm minimization problem. One of the most attractive features of l2,1-norm regularization is that similar sparsity structures can be shared by multiple predictors. However, the nonsmooth nature of the problem poses great challenges. In this paper, an alternating direction multiplier method combined with the spectral gradient method is proposed for solving the matrix l2,1-norm optimization problem involved in multitask feature learning. Numerical experiments show the effectiveness of the proposed algorithm.
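For concreteness, the l2,1-norm above, and the row-wise soft-thresholding (proximal) step that an alternating direction method applies to it at each iteration, can be sketched as follows (an illustrative sketch under the standard definitions, not the paper's solver):

```python
import numpy as np

def l21_norm(W):
    """Matrix l2,1-norm: the sum of the Euclidean norms of the rows of W.
    Minimizing it drives entire rows to zero, so all tasks (columns)
    share the same sparsity pattern over features (rows)."""
    return np.linalg.norm(W, axis=1).sum()

def prox_l21(W, tau):
    """Proximal operator of tau * ||.||_{2,1}: shrink each row's norm by
    tau, zeroing rows whose norm falls below tau. This closed-form
    subproblem is what makes ADMM attractive for the nonsmooth term."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * W

W = np.array([[3.0, 4.0], [0.1, 0.1]])
shrunk = prox_l21(W, 1.0)   # first row scaled by 0.8, second row zeroed
```

The row-wise shrinkage is why l2,1 regularization produces joint sparsity: a feature is kept or discarded for all predictors at once.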
APA, Harvard, Vancouver, ISO, and other styles
36

Green, Spence, Marie-Catherine de Marneffe, and Christopher D. Manning. "Parsing Models for Identifying Multiword Expressions." Computational Linguistics 39, no. 1 (March 2013): 195–227. http://dx.doi.org/10.1162/coli_a_00139.

Full text
Abstract:
Multiword expressions lie at the syntax/semantics interface and have motivated alternative theories of syntax like Construction Grammar. Until now, however, syntactic analysis and multiword expression identification have been modeled separately in natural language processing. We develop two structured prediction models for joint parsing and multiword expression identification. The first is based on context-free grammars and the second uses tree substitution grammars, a formalism that can store larger syntactic fragments. Our experiments show that both models can identify multiword expressions with much higher accuracy than a state-of-the-art system based on word co-occurrence statistics. We experiment with Arabic and French, which both have pervasive multiword expressions. Relative to English, they also have richer morphology, which induces lexical sparsity in finite corpora. To combat this sparsity, we develop a simple factored lexical representation for the context-free parsing model. Morphological analyses are automatically transformed into rich feature tags that are scored jointly with lexical items. This technique, which we call a factored lexicon, improves both standard parsing and multiword expression identification accuracy.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, Xinyu, Ian Colbert, and Srinjoy Das. "Learning Low-Precision Structured Subnetworks Using Joint Layerwise Channel Pruning and Uniform Quantization." Applied Sciences 12, no. 15 (August 4, 2022): 7829. http://dx.doi.org/10.3390/app12157829.

Full text
Abstract:
Pruning and quantization are core techniques used to reduce the inference costs of deep neural networks. Among the state-of-the-art pruning techniques, magnitude-based pruning algorithms have demonstrated consistent success in the reduction of both weight and feature map complexity. However, we find that existing measures of neuron (or channel) importance estimation used for such pruning procedures have at least one of two limitations: (1) failure to consider the interdependence between successive layers; and/or (2) performing the estimation in a parametric setting or by using distributional assumptions on the feature maps. In this work, we demonstrate that the importance rankings of the output neurons of a given layer strongly depend on the sparsity level of the preceding layer, and therefore, naïvely estimating neuron importance to drive magnitude-based pruning will lead to suboptimal performance. Informed by this observation, we propose a purely data-driven nonparametric, magnitude-based channel pruning strategy that works in a greedy manner based on the activations of the previous sparsified layer. We demonstrate that our proposed method works effectively in combination with statistics-based quantization techniques to generate low precision structured subnetworks that can be efficiently accelerated by hardware platforms such as GPUs and FPGAs. Using our proposed algorithms, we demonstrate increased performance per memory footprint over existing solutions across a range of discriminative and generative networks.
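The magnitude-based channel pruning baseline that this paper improves on can be sketched as follows; this simplified version ranks output channels by their l1 norm and, unlike the proposed method, ignores the sparsity level of the preceding layer (a minimal sketch, not the authors' algorithm):

```python
import numpy as np

def prune_channels(weight, frac):
    """Zero out the fraction `frac` of output channels with the smallest
    l1 magnitude. `weight` is a conv tensor shaped (out, in, kh, kw)."""
    out_ch = weight.shape[0]
    scores = np.abs(weight).reshape(out_ch, -1).sum(axis=1)  # per-channel l1
    n_prune = int(frac * out_ch)
    pruned = weight.copy()
    if n_prune > 0:
        drop = np.argsort(scores)[:n_prune]   # weakest channels first
        pruned[drop] = 0.0
    return pruned, scores

w = np.zeros((4, 2, 1, 1))
w[0], w[1], w[2], w[3] = 5.0, 0.5, 2.5, 0.25
pruned, scores = prune_channels(w, 0.5)       # drops channels 1 and 3
```

The paper's observation is that these scores are only meaningful relative to the sparsified activations of the previous layer, which is why it ranks channels layer by layer in a greedy, data-driven pass instead.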
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Wenjie, Hui Li, Rong Jin, Shanlin Wei, Wei Cheng, Weisi Kong, and Penglu Liu. "Distributed Structured Compressive Sensing-Based Time-Frequency Joint Channel Estimation for Massive MIMO-OFDM Systems." Mobile Information Systems 2019 (May 2, 2019): 1–16. http://dx.doi.org/10.1155/2019/2634361.

Full text
Abstract:
In massive multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems, accurate channel state information (CSI) is essential to realize system performance gains such as high spectrum and energy efficiency. However, high-dimensional CSI acquisition requires prohibitively high pilot overhead, which leads to a significant reduction in spectrum efficiency and energy efficiency. In this paper, we propose a more efficient time-frequency joint channel estimation scheme for massive MIMO-OFDM systems to resolve those problems. First, partial channel common support (PCCS) is obtained by using time-domain training. Second, utilizing the spatiotemporal common sparse property of the MIMO channels and the obtained PCCS information, we propose the priori-information aided distributed structured sparsity adaptive matching pursuit (PA-DS-SAMP) algorithm to achieve accurate channel estimation in frequency domain. Third, through performance analysis of the proposed algorithm, two signal power reference thresholds are given, which can ensure that the signal can be recovered accurately under power-limited noise and accurately recovered according to probability under Gaussian noise. Finally, pilot design, computational complexity, spectrum efficiency, and energy efficiency are discussed as well. Simulation results show that the proposed method achieves higher channel estimation accuracy while requiring lower pilot sequence overhead compared with other methods.
APA, Harvard, Vancouver, ISO, and other styles
39

Lu, Chenxiang, Xiangyang Zeng, Qiang Wang, Lu Wang, and Anqi Jin. "Array-Based Underwater Acoustic Target Classification with Spectrum Reconstruction Based on Joint Sparsity and Frequency Shift Invariant Feature." Journal of Marine Science and Engineering 11, no. 6 (May 23, 2023): 1101. http://dx.doi.org/10.3390/jmse11061101.

Full text
Abstract:
The target spectrum, which is commonly used in feature extraction for underwater acoustic target classification, can be improperly recovered via the conventional beamformer (CBF) owing to its frequency-variant spatial response, leading to degraded classification performance. In this paper, we propose a target spectrum reconstruction method under a sparse Bayesian learning framework with joint sparsity priors that can not only achieve high-resolution target separation in the angular domain but also attain beamwidth constancy over a frequency range without sacrificing angular resolution. Experiments on real measured array data show that the spectrum recovered via our proposed method can effectively suppress interference and preserve more detailed spectral structures than CBF. This indicates our method is more suitable for target classification because it retains more representative and discriminative characteristics. Moreover, due to target motion and the underwater channel effect, the frequency of prominent spectral line components can shift over time, which is harmful to classification performance. To overcome this problem, we propose a frequency shift-invariant feature extraction method with the help of elaborately designed frequency shift-invariant filter banks. The classification experiments demonstrate that our proposed methods outperform traditional CBF and Mel-frequency features and can help improve underwater recognition performance.
APA, Harvard, Vancouver, ISO, and other styles
40

Ding, Yang, and Leyuan Zhou. "Inner-Cluster-Structure Reconstruction Based Transfer Fuzzy Clustering and MRI Segmentation Applications." Journal of Medical Imaging and Health Informatics 10, no. 7 (July 1, 2020): 1575–83. http://dx.doi.org/10.1166/jmihi.2020.3052.

Full text
Abstract:
Automatic MRI segmentation is very useful in clinical diagnosis. However, in some cases, MR images are contaminated by noise or lose pixels and become so sparse that automatic segmentation by classical algorithms is difficult or impossible. In this study, we propose a transfer fuzzy clustering algorithm based on inner-cluster-structure reconstruction. First, we represent the inner cluster structure by assigning weights to all objects. Second, in order to reconstruct the cluster structure in the target domain, in which objects are sparsely distributed or contaminated by noise, we join the two domains together and recalculate the weights of all objects in the target domain. Third, the updated weights in the target domain are treated as transfer knowledge that guides learning in the target domain. Experimental results on MR images and synthetic datasets indicate that our novel algorithm achieves the best performance compared with other similar algorithms.
APA, Harvard, Vancouver, ISO, and other styles
41

Han, Song, Xiaoping Liu, Xing Han, Gang Wang, and Shaobo Wu. "Visual Sorting of Express Parcels Based on Multi-Task Deep Learning." Sensors 20, no. 23 (November 27, 2020): 6785. http://dx.doi.org/10.3390/s20236785.

Full text
Abstract:
Visual sorting of express parcels in complex scenes has always been a key issue in intelligent logistics sorting systems. With existing methods, it is still difficult to achieve fast and accurate sorting of disorderly stacked parcels. In order to achieve accurate detection and efficient sorting of disorderly stacked express parcels, we propose a robot sorting method based on multi-task deep learning. Firstly, a lightweight object detection network model is proposed to improve the real-time performance of the system. A scale variable and the joint weights of the network are used to sparsify the model and automatically identify unimportant channels. Pruning strategies are used to reduce the model size and increase the speed of detection without losing accuracy. Then, an optimal sorting position and pose estimation network model based on multi-task deep learning is proposed. Using an end-to-end network structure, the optimal sorting positions and poses of express parcels are estimated in real time by combining pose and position information for joint training. It is proved that this model can further improve the sorting accuracy. Finally, the accuracy and real-time performance of this method are verified by robotic sorting experiments.
APA, Harvard, Vancouver, ISO, and other styles
42

Caughey, Devin, and Mallory Wang. "Dynamic Ecological Inference for Time-Varying Population Distributions Based on Sparse, Irregular, and Noisy Marginal Data." Political Analysis 27, no. 3 (April 11, 2019): 388–96. http://dx.doi.org/10.1017/pan.2019.4.

Full text
Abstract:
Social scientists are frequently interested in how populations evolve over time. Creating poststratification weights for surveys, for example, requires information on the weighting variables’ joint distribution in the target population. Typically, however, population data are sparsely available across time periods. Even when population data are observed, the content and structure of the data—which variables are observed and whether their marginal or joint distributions are known—differ across time, in ways that preclude straightforward interpolation. As a consequence, survey weights are often based only on the small subset of auxiliary variables whose joint population distribution is observed regularly over time, and thus fail to take full advantage of auxiliary information. To address this problem, we develop a dynamic Bayesian ecological inference model for estimating multivariate categorical distributions from sparse, irregular, and noisy data on their marginal (or partially joint) distributions. Our approach combines (1) a Dirichlet sampling model for the observed margins conditional on the unobserved cell proportions; (2) a set of equations encoding the logical relationships among different population quantities; and (3) a Dirichlet transition model for the period-specific proportions that pools information across time periods. We illustrate this method by estimating annual U.S. phone-ownership rates by race and region based on population data irregularly available between 1930 and 1960. This approach may be useful in a wide variety of contexts where scholars wish to make dynamic ecological inferences about interior cells from marginal data. A new R package estsubpop implements the method.
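The Dirichlet transition component can be illustrated with a toy simulation (a sketch under stated assumptions, not the estsubpop implementation): each period's cell proportions are drawn from a Dirichlet centred on the previous period's, with the concentration parameter controlling how strongly information pools across time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dirichlet transition model: this period's cell proportions are centred
# on last period's; a large concentration alpha means smooth evolution
alpha = 200.0
pi = np.array([0.5, 0.3, 0.2])     # initial cell proportions
path = [pi]
for _ in range(5):
    pi = rng.dirichlet(alpha * pi)
    path.append(pi)

for p in path:
    print(np.round(p, 3))          # each row still sums to 1
```

Smaller values of alpha would let the proportions drift more between periods, pooling less information across time.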
APA, Harvard, Vancouver, ISO, and other styles
43

Grigorenko, S. G., T. G. Taranova, V. A. Kostin, T. G. Solomijchuk, V. Yu Bilous, and E. L. Vrzhizhevskyi. "Influence of heat treatment on the structure and fracture mode of welded joints of sparsely-alloyed titanium alloy." Sovremennaâ èlektrometallurgiâ 2021, no. 3 (September 28, 2021): 42–48. http://dx.doi.org/10.37434/sem2021.03.07.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Vitali, F., S. Marini, D. Pala, A. Demartini, S. Montoli, A. Zambelli, and R. Bellazzi. "Patient similarity by joint matrix trifactorization to identify subgroups in acute myeloid leukemia." JAMIA Open 1, no. 1 (May 14, 2018): 75–86. http://dx.doi.org/10.1093/jamiaopen/ooy008.

Full text
Abstract:
Abstract Objective Computing patients’ similarity is of great interest in precision oncology since it supports clustering and subgroup identification, eventually leading to tailored therapies. The availability of large amounts of biomedical data, characterized by large feature sets and sparse content, motivates the development of new methods to compute patient similarities able to fuse heterogeneous data sources with the available knowledge. Materials and Methods In this work, we developed a data integration approach based on matrix trifactorization to compute patient similarities by integrating several sources of data and knowledge. We assess the accuracy of the proposed method: (1) on several synthetic data sets whose similarity structures are affected by increasing levels of noise and data sparsity, and (2) on a real data set coming from an acute myeloid leukemia (AML) study. The results obtained are finally compared with those of traditional similarity calculation methods. Results In the analysis of the synthetic data set, where the ground truth is known, we measured the capability of reconstructing the correct clusters, while in the AML study we evaluated the Kaplan-Meier curves obtained with the different clusters and measured their statistical difference by means of the log-rank test. In the presence of noise and sparse data, our data integration method outperforms other techniques, both in the synthetic and in the AML data. Discussion In the case of multiple heterogeneous data sources, a matrix trifactorization technique can successfully fuse all the information in a joint model. We demonstrated how this approach can be efficiently applied to discover meaningful patient similarities and therefore may be considered a reliable data-driven strategy for the definition of new research hypotheses for precision oncology.
Conclusion The better performance of the proposed approach presents an advantage over previous methods to provide accurate patient similarities supporting precision medicine.
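As a rough illustration of deriving patient-patient similarities from a shared low-rank representation (a sketch only; here a truncated SVD stands in for the learned trifactorization factors, and the toy matrix is invented):

```python
import numpy as np

# toy patient-by-feature matrix (sparse, two obvious patient groups)
R = np.array([[1., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 0., 1., 1.],
              [0., 1., 0., 1.]])

# truncated SVD stands in for the factors G, S, H of a trifactorization
# R ≈ G S H^T learned by the method (illustrative only)
U, s, Vt = np.linalg.svd(R)
k = 2
G = U[:, :k] * s[:k]          # patient factor, absorbing the core matrix

# patient-patient similarity from the shared low-dimensional space
sim = G @ G.T                 # 4x4 symmetric similarity matrix
print(np.round(sim, 2))
```

In the actual method the factors are obtained by jointly factorizing several relation matrices, so the similarity fuses all data sources rather than a single matrix.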
APA, Harvard, Vancouver, ISO, and other styles
45

Song, Jiagang, Jiayu Song, Xinpan Yuan, Xiao He, and Xinghui Zhu. "Graph Representation-Based Deep Multi-View Semantic Similarity Learning Model for Recommendation." Future Internet 14, no. 2 (January 19, 2022): 32. http://dx.doi.org/10.3390/fi14020032.

Full text
Abstract:
With the rapid development of Internet technology, how to mine and analyze massive amounts of network information to provide users with accurate and fast recommendation information has become a hot and difficult topic of joint research in industry and academia in recent years. One of the most widely used social network recommendation methods is collaborative filtering. However, traditional social network-based collaborative filtering algorithms encounter problems such as low recommendation performance and cold starts due to high data sparsity and uneven distribution. In addition, these collaborative filtering algorithms do not effectively consider the implicit trust relationship between users. To this end, this paper proposes a collaborative filtering recommendation algorithm based on GraphSAGE (GraphSAGE-CF). The algorithm first uses GraphSAGE to learn low-dimensional feature representations of the global and local structures of user nodes in social networks and then calculates the implicit trust relationship between users from the learned feature representations. Finally, a comprehensive evaluation combines the scores of users and their implicitly trusted users on related items to predict users' scores on target items. Experimental results on four open standard datasets show that our proposed GraphSAGE-CF algorithm is superior to existing algorithms in terms of RMSE and MAE.
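The neighbourhood aggregation at the heart of GraphSAGE can be sketched as follows (an illustrative mean-aggregator layer in numpy, not the paper's implementation; the graph and weights are made up):

```python
import numpy as np

def sage_mean_layer(H, adj, W_self, W_neigh):
    """One GraphSAGE layer with a mean aggregator: each node combines its
    own features with the average of its neighbours' features (sketch)."""
    deg = adj.sum(axis=1, keepdims=True)
    neigh = (adj @ H) / np.maximum(deg, 1)                 # mean over neighbours
    return np.maximum(H @ W_self + neigh @ W_neigh, 0.0)   # ReLU

# toy user graph: friendships 0-1 and 1-2
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
H = np.eye(3)                                              # one-hot initial features
rng = np.random.default_rng(1)
Z = sage_mean_layer(H, adj, rng.normal(size=(3, 2)), rng.normal(size=(3, 2)))
print(Z.shape)   # (3, 2): a low-dimensional embedding per user
```

Stacking such layers lets each node's embedding reflect both local and more global graph structure, which is what the trust computation then operates on.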
APA, Harvard, Vancouver, ISO, and other styles
46

Song, Min, Minhyuk Lee, Taesung Park, and Mira Park. "MP-LASSO chart: a multi-level polar chart for visualizing group LASSO analysis of genomic data." Genomics & Informatics 20, no. 4 (December 31, 2022): e48. http://dx.doi.org/10.5808/gi.22075.

Full text
Abstract:
Penalized regression has been widely used in genome-wide association studies for joint analyses to find genetic associations. Among penalized regression models, the least absolute shrinkage and selection operator (LASSO) method effectively removes some coefficients from the model by shrinking them toward zero. To handle group structures, such as genes and pathways, several modified LASSO penalties have been proposed, including group LASSO and sparse group LASSO. Group LASSO ensures sparsity at the level of pre-defined groups, eliminating unimportant groups. Sparse group LASSO performs group selection as in group LASSO, but also performs individual selection as in LASSO. While these sparse methods are useful in high-dimensional genetic studies, interpreting the results with many groups and coefficients is not straightforward. LASSO's results are often expressed as trace plots of regression coefficients. However, few studies have explored the systematic visualization of group information. In this study, we propose a multi-level polar LASSO (MP-LASSO) chart, which can effectively represent the results from group LASSO and sparse group LASSO analyses. An R package to draw MP-LASSO charts was developed. Through a real-world genetic data application, we demonstrated that our MP-LASSO chart package effectively visualizes the results of LASSO, group LASSO, and sparse group LASSO.
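The penalties whose results the chart visualizes can be written down concisely. A small sketch of the standard sparse group LASSO penalty (textbook formulation, not the MP-LASSO package code; alpha=1 recovers LASSO and alpha=0 group LASSO):

```python
import numpy as np

def sparse_group_lasso_penalty(beta, groups, lam, alpha):
    """alpha*lam*||beta||_1 + (1-alpha)*lam*sum_g sqrt(p_g)*||beta_g||_2,
    the usual sparse group LASSO penalty over pre-defined groups."""
    l1 = np.abs(beta).sum()
    l2 = sum(np.sqrt(len(g)) * np.linalg.norm(beta[g]) for g in groups)
    return alpha * lam * l1 + (1 - alpha) * lam * l2

beta = np.array([0.0, 0.0, 3.0, 4.0])
groups = [[0, 1], [2, 3]]        # e.g. two genes with two SNPs each
print(sparse_group_lasso_penalty(beta, groups, lam=1.0, alpha=0.0))
# pure group term: sqrt(2)*0 + sqrt(2)*5 ≈ 7.071
```

The group term zeroes out whole groups at once, while the l1 term additionally zeroes individual coefficients inside surviving groups, which is exactly the two-level sparsity the chart has to display.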
APA, Harvard, Vancouver, ISO, and other styles
47

Hu, Xueru, Lanyue Zhang, Di Wu, and Jia Wang. "Underwater Acoustic Channel Estimation via an Orthogonal Matching Pursuit Algorithm Based on the Modified Phase-Transform-Weighted Function." Journal of Marine Science and Engineering 11, no. 7 (July 11, 2023): 1397. http://dx.doi.org/10.3390/jmse11071397.

Full text
Abstract:
In the context of torpedo guidance systems, the performance of active sonar in channel parameter estimation and target detection and recognition is significantly degraded by the multipath effect and the time-varying characteristics of the underwater acoustic (UWA) channel. Therefore, it is urgent to propose an algorithm that can accurately estimate the channel parameters in multipath time-varying UWA channels. To solve these problems, this study developed a modified phase transform (PHAT)-weighted function and applied it to the orthogonal matching pursuit (OMP) algorithm, named M-PHAT-OMP. The proposed algorithm is more robust, improves the resolution of the time delay and further improves the estimation accuracy of the parameters in the case of motion. Furthermore, with the aim of solving the problem of the difficulty that the traditional OMP algorithm has in determining sparsity, this study proposes a joint-threshold method, where the threshold value serves as the condition for terminating the algorithm’s iteration. The simulation results demonstrate that the M-PHAT-OMP algorithm proposed in this study exhibits a superior performance compared to other algorithms, as evidenced by its lower root mean square error (RMSE) for delay. Moreover, the experimental results also validate that the proposed algorithm has superior robustness and resolution of the time delay in practical applications.
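For reference, the baseline OMP that the M-PHAT weighting and joint-threshold stopping rule modify can be sketched as follows (a textbook greedy version with a fixed sparsity, not the paper's algorithm; the dictionary and signal are synthetic):

```python
import numpy as np

def omp(A, y, sparsity):
    """Basic orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then refit the support by least squares."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary columns
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]              # two multipath "taps"
x_hat = omp(A, A @ x_true, sparsity=2)
print(np.nonzero(x_hat)[0])               # support found (ideally [2 7])
```

In the channel-estimation setting the columns of A correspond to delay hypotheses, and the paper's joint threshold replaces the fixed `sparsity` argument as the iteration-stopping condition.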
APA, Harvard, Vancouver, ISO, and other styles
48

Welford, J. Kim, Deric Cameron, Erin Gillis, Victoria Mitchell, and Richard Wright. "Crustal structure of the offshore Labrador margin into deep water from combined seismic reflection interpretation and gravity modeling." Interpretation 8, no. 2 (May 1, 2020): SH1—SH17. http://dx.doi.org/10.1190/int-2019-0068.1.

Full text
Abstract:
A regional long-offset 2D seismic reflection program undertaken along the Labrador margin of the Labrador Sea, Canada, and complemented by the acquisition of coincident gravity data, has provided an extensive data set with which to image and model the sparsely investigated outer shelf, slope, and deepwater regions. Previous interpretation of the seismic data revealed the extent of Mesozoic and Cenozoic basins and resulted in the remapping of the basin configuration for the entire margin. To map the synrift package and improve understanding of the geometry and extent of these basins, we have undertaken joint seismic interpretation and gravity forward modeling to reduce uncertainty in the identification of the prerift basement, which varies between Paleozoic shelfal deposits and Precambrian crystalline rocks, with similar density characteristics. With this iterative approach, we have obtained new depth to basement constraints and have deduced further constraints on crustal thickness variations along the Labrador margin. At the crustal scale, extreme localized crustal thinning has been revealed along the southern and central portions of the Labrador margin, whereas a broad, margin-parallel zone of thicker crust has been detected outboard of the continental shelf along the northern Labrador margin. Our final gravity models suggest that Late Cretaceous rift packages from further south extend along the entire Labrador margin and open the possibility of a Late Cretaceous source rock fairway extending into the Labrador basins.
APA, Harvard, Vancouver, ISO, and other styles
49

Price, M. A., J. D. McEwen, L. Pratley, and T. D. Kitching. "Sparse Bayesian mass-mapping with uncertainties: Full sky observations on the celestial sphere." Monthly Notices of the Royal Astronomical Society 500, no. 4 (November 17, 2020): 5436–52. http://dx.doi.org/10.1093/mnras/staa3563.

Full text
Abstract:
ABSTRACT To date, weak gravitational lensing surveys have typically been restricted to small fields of view, such that the flat-sky approximation has been sufficiently satisfied. However, with Stage IV surveys (e.g. LSST and Euclid) imminent, extending mass-mapping techniques to the sphere is a fundamental necessity. As such, we extend the sparse hierarchical Bayesian mass-mapping formalism presented in previous work to the spherical sky. For the first time, this allows us to construct maximum a posteriori spherical weak lensing dark-matter mass-maps, with principled Bayesian uncertainties, without imposing or assuming Gaussianity. We solve the spherical mass-mapping inverse problem in the analysis setting adopting a sparsity-promoting Laplace-type wavelet prior, though this theoretical framework supports all log-concave posteriors. Our spherical mass-mapping formalism facilitates principled statistical interpretation of reconstructions. We apply our framework to convergence reconstruction on high resolution N-body simulations with pseudo-Euclid masking, polluted with a variety of realistic noise levels, and show a significant increase in reconstruction fidelity compared to standard approaches. Furthermore, we perform the largest joint reconstruction to date of the majority of publicly available shear observational data sets (combining DESY1, KiDS450, and CFHTLens) and find that our formalism recovers a convergence map with significantly enhanced small-scale detail. Within our Bayesian framework we validate, in a statistically rigorous manner, the community’s intuition regarding the need to smooth spherical Kaiser-Squires estimates to provide physically meaningful convergence maps. Such approaches cannot reveal the small-scale physical structures that we recover within our framework.
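The sparsity-promoting Laplace-type prior corresponds, in proximal MAP algorithms, to soft-thresholding of the wavelet coefficients. A one-line sketch of that proximal operator (illustrative, with made-up coefficients):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 (Laplace-type) prior: shrink each
    coefficient toward zero by lam, zeroing the small ones."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

coeffs = np.array([-2.0, -0.3, 0.1, 0.8, 3.0])
print(soft_threshold(coeffs, 0.5))   # [-1.5, -0., 0., 0.3, 2.5]
```

Iterating this shrinkage inside a forward-backward or primal-dual scheme is what yields the sparse MAP reconstruction; the Bayesian machinery then quantifies its uncertainty.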
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Chenming, and Bo Huang. "The Use of Attentive Knowledge Graph Perceptual Propagation for Improving Recommendations." Applied Sciences 13, no. 8 (April 7, 2023): 4667. http://dx.doi.org/10.3390/app13084667.

Full text
Abstract:
Collaborative filtering (CF) usually suffers from data sparsity and cold starts. Knowledge graphs (KGs) are widely used to improve recommendation performance. To verify that knowledge graphs can further alleviate the above problems, this paper proposes an end-to-end framework that uses attentive knowledge graph perceptual propagation for recommendations (AKGP). This framework uses a knowledge graph as a source of auxiliary information to extract user–item interaction information and build a sub-knowledge base. The fusion of structural and contextual information is used to construct fine-grained knowledge graphs via knowledge graph embedding methods and to generate initial embedding representations. Through multi-layer propagation, the structured information and historical preference information are embedded into a unified vector space, and the potential user–item vector representation is expanded. A knowledge-aware attention module is used to obtain the feature representation, and finally the model is optimized using a stratified-sampling joint learning method. Compared with the baseline model on the MovieLens-1M, Last-FM, Book-Crossing and other data sets, the experimental results demonstrate that the model outperforms state-of-the-art KG-based recommendation methods and improves on the shortcomings of existing models. The model was applied to product design data and historical maintenance records provided by an automotive parts manufacturing company. The predictions of the recommender system are matched to the product requirements and possible failure records. This helped reduce costs and increase productivity, helping the company to quickly determine the cause of failures and reduce unplanned downtime.
APA, Harvard, Vancouver, ISO, and other styles