Academic literature on the topic 'Joint sparsity structure'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Joint sparsity structure.'


Journal articles on the topic "Joint sparsity structure"

1

Huang, Junhao, Weize Sun, and Lei Huang. "Joint Structure and Parameter Optimization of Multiobjective Sparse Neural Network." Neural Computation 33, no. 4 (2021): 1113–43. http://dx.doi.org/10.1162/neco_a_01368.

Abstract:
This work addresses the problem of network pruning and proposes a novel joint training method based on a multiobjective optimization model. Most of the state-of-the-art pruning methods rely on user experience for selecting the sparsity ratio of the weight matrices or tensors, and thus suffer from severe performance reduction with inappropriate user-defined parameters. Moreover, networks might be inferior due to the inefficient connecting architecture search, especially when they are highly sparse. It is revealed in this work that the network model might maintain a sparse characteristic in the early stage of the backpropagation (BP) training process, and that evolutionary computation-based algorithms can accurately discover a connecting architecture with satisfactory network performance. In particular, we establish a multiobjective sparse model for network pruning and propose an efficient approach that combines BP training and two modified multiobjective evolutionary algorithms (MOEAs). The BP algorithm converges quickly, and the two MOEAs can search for the optimal sparse structure and refine the weights, respectively. Experiments are also included to demonstrate the benefits of the proposed algorithm. We show that the proposed method can obtain a desired Pareto front (PF), leading to a better pruning result compared to state-of-the-art methods, especially when the network structure is highly sparse.
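The user-chosen sparsity ratio the abstract flags as fragile is easiest to see in the magnitude-pruning baseline such methods compare against. A minimal sketch of that baseline (illustrative only, not the paper's MOEA-based method):

```python
def prune_by_magnitude(weights, sparsity_ratio):
    """Zero out the smallest-magnitude fraction of weights.

    `sparsity_ratio` is the user-defined parameter the paper argues is hard
    to choose well: too high and accuracy collapses, too low and little
    memory is saved.
    """
    flat = sorted(abs(w) for w in weights)
    cutoff = int(len(flat) * sparsity_ratio)
    if cutoff == 0:
        return list(weights)
    threshold = flat[cutoff - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

For example, pruning `[0.1, -0.5, 0.05, 2.0]` at a ratio of 0.5 zeroes the two smallest-magnitude entries and keeps `-0.5` and `2.0`.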
2

Qin, Si, Yimin D. Zhang, Qisong Wu, and Moeness G. Amin. "Structure-Aware Bayesian Compressive Sensing for Near-Field Source Localization Based on Sensor-Angle Distributions." International Journal of Antennas and Propagation 2015 (2015): 1–15. http://dx.doi.org/10.1155/2015/783467.

Abstract:
A novel technique for localization of narrowband near-field sources is presented. The technique utilizes the sensor-angle distribution (SAD) that treats the source range and direction-of-arrival (DOA) information as sensor-dependent phase progression. The SAD draws parallels to quadratic time-frequency distributions and, as such, is able to reveal the changes in the spatial frequency over sensor positions. For a moderate source range, the SAD signature is of a polynomial shape, thus simplifying the parameter estimation. Both uniform and sparse linear arrays are considered in this work. To exploit the sparsity and continuity of the SAD signature in the joint space and spatial frequency domain, a modified Bayesian compressive sensing algorithm is exploited to estimate the SAD signature. In this method, a spike-and-slab prior is used to statistically encourage sparsity of the SAD across each segmented SAD region, and a patterned prior is imposed to enforce the continuous structure of the SAD. The results are then mapped back to source range and DOA estimation for source localization. The effectiveness of the proposed technique is verified using simulation results with uniform and sparse linear arrays where the array sensors are located on a grid but with consecutive and missing positions.
3

Li, Meng, Liang Yan, and Qianying Wang. "Group Sparse Regression-Based Learning Model for Real-Time Depth-Based Human Action Prediction." Mathematical Problems in Engineering 2018 (December 24, 2018): 1–7. http://dx.doi.org/10.1155/2018/8201509.

Abstract:
This paper addresses the problem of predicting human actions in depth videos. Due to the complex spatiotemporal structure of human actions, it is difficult to infer ongoing human actions before they are fully executed. To handle this challenging issue, we first propose two new depth-based features called pairwise relative joint orientations (PRJOs) and depth patch motion maps (DPMMs) to represent the relative movements between each pair of joints and human-object interactions, respectively. The two proposed depth-based features are suitable for recognizing and predicting human actions in a real-time fashion. Then, we propose a regression-based learning approach with a group sparsity-inducing regularizer to learn an action predictor based on the combination of PRJOs and DPMMs for a sparse set of joints. Experimental results on benchmark datasets have demonstrated that our proposed approach significantly outperforms existing methods for real-time human action recognition and prediction from depth data.
4

Birdi, Jasleen, Audrey Repetti, and Yves Wiaux. "Sparse interferometric Stokes imaging under the polarization constraint (Polarized SARA)." Monthly Notices of the Royal Astronomical Society 478, no. 4 (July 4, 2018): 4442–63. http://dx.doi.org/10.1093/mnras/sty1182.

Abstract:
We develop a novel algorithm for sparse imaging of Stokes parameters in radio interferometry under the polarization constraint. The latter is a physical non-linear relation between the Stokes parameters, imposing the polarization intensity as a lower bound on the total intensity. To solve the joint inverse Stokes imaging problem including this bound, we leverage epigraphical projection techniques in convex optimization and we design a primal-dual method offering a highly flexible and parallelizable structure. In addition, we propose to regularize each Stokes parameter map through an average sparsity prior in the context of a reweighted analysis approach (SARA). The resulting method is dubbed Polarized SARA. Using simulated observations of M87 with the Event Horizon Telescope, we demonstrate that imposing the polarization constraint leads to superior image quality. For the considered data sets, the results also indicate better performance of the average sparsity prior in comparison with the widely used Cotton-Schwab clean algorithm and other total variation based priors for polarimetric imaging. Our MATLAB code is available online on GitHub.
5

Cao, Meng, Wenxing Bao, and Kewen Qu. "Hyperspectral Super-Resolution Via Joint Regularization of Low-Rank Tensor Decomposition." Remote Sensing 13, no. 20 (October 14, 2021): 4116. http://dx.doi.org/10.3390/rs13204116.

Abstract:
The hyperspectral image super-resolution (HSI-SR) problem aims at reconstructing the high resolution spatial-spectral information of the scene by fusing low-resolution hyperspectral images (LR-HSI) and the corresponding high-resolution multispectral image (HR-MSI). In order to effectively preserve the spatial and spectral structure of hyperspectral images, a new joint regularized low-rank tensor decomposition method (JRLTD) is proposed for HSI-SR. This model alleviates the problem that the traditional HSI-SR method, based on tensor decomposition, fails to adequately take into account the manifold structure of high-dimensional HR-HSI and is sensitive to outliers and noise. The model first operates on the hyperspectral data using the classical Tucker decomposition to transform the hyperspectral data into the form of a three-mode dictionary multiplied by the core tensor, after which the graph regularization and unidirectional total variational (TV) regularization are introduced to constrain the three-mode dictionary. In addition, we impose the l1-norm on the core tensor to characterize the sparsity. While effectively preserving the spatial and spectral structures in the fused hyperspectral images, the presence of anomalous noise values in the images is reduced. In this paper, the hyperspectral image super-resolution problem is transformed into a joint regularization optimization problem based on tensor decomposition and solved by a hybrid framework combining the alternating direction method of multipliers (ADMM) and the proximal alternate optimization (PAO) algorithm. Experimental results conducted on two benchmark datasets and one real dataset show that JRLTD achieves superior performance over state-of-the-art hyperspectral super-resolution algorithms.
6

Dai, Ling-Yun, Rong Zhu, and Juan Wang. "Joint Nonnegative Matrix Factorization Based on Sparse and Graph Laplacian Regularization for Clustering and Co-Differential Expression Genes Analysis." Complexity 2020 (November 16, 2020): 1–10. http://dx.doi.org/10.1155/2020/3917812.

Abstract:
The explosion of multiomics data poses new challenges to existing data mining methods. Joint analysis of multiomics data can make the best of the complementary information provided by different types of data and can therefore more accurately explore the biological mechanisms of diseases. In this article, two forms of joint nonnegative matrix factorization based on sparse and graph Laplacian regularization (SG-jNMF) are proposed. In these methods, the graph regularization constraint preserves the local geometric structure of the data, and L2,1-norm regularization enhances sparsity among the rows and removes redundant features from the data. First, SG-jNMF1 projects multiomics data into a common subspace and applies the multiomics fusion characteristic matrix to mine the important information closely related to diseases. Second, multiomics data of the same disease are mapped into the common sample space by SG-jNMF2, and the cluster structures are detected clearly. Experimental results show that SG-jNMF can achieve significant improvement in sample clustering compared with existing joint analysis frameworks. SG-jNMF also effectively integrates multiomics data to identify co-differentially expressed genes (Co-DEGs). SG-jNMF provides an efficient integrative analysis method for mining the biological information hidden in heterogeneous multiomics data.
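The L2,1 regularizer named in this abstract sums the Euclidean norms of the rows of a factor matrix, and its proximal operator shrinks whole rows at once, which is what removes redundant features. A small self-contained sketch (plain lists of lists stand in for the factor matrices; this is the generic operator, not the SG-jNMF update rules):

```python
import math

def l21_norm(W):
    """Sum of the Euclidean norms of the rows of W (list of lists)."""
    return sum(math.sqrt(sum(x * x for x in row)) for row in W)

def prox_l21(W, t):
    """Row-wise soft-thresholding: the proximal operator of t * ||W||_{2,1}.

    Rows whose norm is at most t are zeroed out entirely, so sparsity is
    induced among rows (features) rather than individual entries.
    """
    result = []
    for row in W:
        nrm = math.sqrt(sum(x * x for x in row))
        scale = max(0.0, 1.0 - t / nrm) if nrm > 0 else 0.0
        result.append([scale * x for x in row])
    return result
```

Applying `prox_l21` with `t = 1.0` to a matrix whose second row has norm 0.5 zeroes that row completely while only shrinking the larger first row.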
7

Abdulaziz, Abdullah, Arwa Dabbech, and Yves Wiaux. "Wideband super-resolution imaging in Radio Interferometry via low rankness and joint average sparsity models (HyperSARA)." Monthly Notices of the Royal Astronomical Society 489, no. 1 (August 5, 2019): 1230–48. http://dx.doi.org/10.1093/mnras/stz2117.

Abstract:
We propose a new approach within the versatile framework of convex optimization to solve the radio-interferometric wideband imaging problem. Our approach, dubbed HyperSARA, leverages low-rankness and joint average sparsity priors to enable the formation of high-resolution and high-dynamic-range image cubes from visibility data. The resulting minimization problem is solved using a primal-dual algorithm. The algorithmic structure offers useful functionalities such as preconditioning for accelerated convergence, and parallelization that spreads the computational cost and memory requirements across a multitude of processing nodes with limited resources. In this work, we provide a proof of concept for wideband image reconstruction of megabyte-size images. The better performance of HyperSARA, in terms of resolution and dynamic range of the formed images, compared to single-channel imaging and the clean-based wideband imaging algorithm in the WSClean software, is showcased on simulations and Very Large Array observations. Our MATLAB code is available online on GitHub.
8

Tigges, Timo, Janis Sarikas, Michael Klum, and Reinhold Orglmeister. "Compressed sensing of multi-lead ECG signals by compressive multiplexing." Current Directions in Biomedical Engineering 1, no. 1 (September 1, 2015): 65–68. http://dx.doi.org/10.1515/cdbme-2015-0017.

Abstract:
Compressed sensing has recently been proposed for efficient data compression of multi-lead electrocardiogram recordings within ambulatory patient monitoring applications, e.g. wireless body sensor networks. However, current approaches only focus on signal reconstruction and do not consider the efficient compression of signal ensembles. In this work, we propose the utilization of a compressive multiplexing architecture that facilitates an efficient implementation of hardware compressed sensing for multi-lead ECG signals. For the reconstruction of ECG signal ensembles, we employ a greedy algorithm that exploits their joint sparsity structure. Our simulation study shows promising results which motivate further research in the field of compressive multiplexing for the acquisition of multi-lead ECG signals.
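The abstract does not name the exact greedy algorithm; the atom-selection rule below is the one used by SOMP-style methods for jointly sparse ensembles, offered as an illustrative sketch rather than the authors' implementation:

```python
def joint_support_scores(A, Y):
    """Score each dictionary atom by its aggregate correlation with every
    channel's measurements. SOMP-style greedy methods pick the best-scoring
    atom at each iteration; summing over channels is what exploits the
    joint sparsity of an ensemble such as multi-lead ECG.

    A: m x d dictionary (list of rows); Y: list of m-dimensional channel
    measurement vectors.
    """
    m, d = len(A), len(A[0])
    scores = []
    for j in range(d):
        column = [A[t][j] for t in range(m)]
        total = 0.0
        for y in Y:
            corr = sum(c * v for c, v in zip(column, y))
            total += corr * corr
        scores.append(total)
    return scores
```

With an identity dictionary and two channels that are nonzero in the same coordinates, the shared support coordinates receive the highest scores.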
9

Ge, Ting, Tianming Zhan, Qinfeng Li, and Shanxiang Mu. "Optimal Superpixel Kernel-Based Kernel Low-Rank and Sparsity Representation for Brain Tumour Segmentation." Computational Intelligence and Neuroscience 2022 (June 24, 2022): 1–12. http://dx.doi.org/10.1155/2022/3514988.

Abstract:
Given the need for quantitative measurement and 3D visualisation of brain tumours, more and more attention has been paid to the automatic segmentation of tumour regions from brain tumour magnetic resonance (MR) images. In view of the uneven grey distribution of MR images and the fuzzy boundaries of brain tumours, a representation model based on the joint constraints of kernel low-rank and sparsity (KLRR-SR) is proposed to mine the characteristics and structural prior knowledge of brain tumour image in the spectral kernel space. In addition, the optimal kernel based on superpixel uniform regions and multikernel learning (MKL) is constructed to improve the accuracy of the pairwise similarity measurement of pixels in the kernel space. By introducing the optimal kernel into KLRR-SR, the coefficient matrix can be solved, which allows brain tumour segmentation results to conform with the spatial information of the image. The experimental results demonstrate that the segmentation accuracy of the proposed method is superior to several existing methods under different indicators and that the sparsity constraint for the coefficient matrix in the kernel space, which is integrated into the kernel low-rank model, has certain effects in preserving the local structure and details of brain tumours.
10

Ge, Ting, Ning Mu, Tianming Zhan, Zhi Chen, Wanrong Gao, and Shanxiang Mu. "Brain Lesion Segmentation Based on Joint Constraints of Low-Rank Representation and Sparse Representation." Computational Intelligence and Neuroscience 2019 (July 1, 2019): 1–11. http://dx.doi.org/10.1155/2019/9378014.

Abstract:
The segmentation of brain lesions from a brain magnetic resonance (MR) image is of great significance for clinical diagnosis and follow-up treatment. An automatic segmentation method for brain lesions is proposed based on low-rank representation (LRR) and sparse representation (SR) theory. The proposed method decomposes the brain image into a background part composed of brain tissue and a brain lesion part. Considering that each pixel in the brain tissue can be represented by the background dictionary, a low-rank representation that incorporates a sparsity-inducing regularization term is adopted to model the background part. Then, the linearized alternating direction method with adaptive penalty (LADMAP) is selected to solve the model, and the brain lesions are obtained from the response of the residual matrix. The presented model not only reflects the global structure of the image but also preserves the local information of the pixels, thus improving the representation accuracy. The experimental results on the data of brain tumor patients and multiple sclerosis patients revealed that the proposed method is superior to several existing methods in terms of segmentation accuracy while realizing the segmentation automatically.

Dissertations / Theses on the topic "Joint sparsity structure"

1

Liu, Penghuan. "Statistical and numerical optimization for speckle blind structured illumination microscopy." Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0008/document.

Abstract:
Conventional structured illumination microscopy (SIM) can surpass the resolution limit in optical microscopy caused by the diffraction effect, through illuminating the object with a set of perfectly known harmonic patterns. However, controlling the illumination patterns is a difficult task. Even worse, strong distortions of the light grid can be induced by the sample within the investigated volume, which may give rise to strong artifacts in SIM reconstructed images. Recently, blind-SIM strategies were proposed, where images are acquired through unknown, non-harmonic, speckle illumination patterns, which are much easier to generate in practice. The super-resolution capacity of such approaches was observed, although it was not well understood theoretically. This thesis presents two new reconstruction methods in SIM using unknown speckle patterns (blind-speckle-SIM): one joint reconstruction approach and one marginal reconstruction approach. In the joint reconstruction approach, we estimate the object and the speckle patterns together by considering a basis pursuit denoising (BPDN) model with lp,q-norm regularization, with p ≥ 1 and 0
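The lp,q regularization named in this abstract is the mixed norm below (p within a row, q across rows); with q ≤ 1 it promotes joint sparsity across rows. This is a direct transcription of the penalty only, not a sketch of the thesis's BPDN solver, and the row grouping is an assumption about how the variables are organized:

```python
def mixed_lpq_norm(W, p, q):
    """||W||_{p,q} = (sum_i ||row_i||_p ** q) ** (1/q).

    p >= 1 keeps each within-row norm convex; q <= 1 makes the outer sum
    drive entire rows to zero, which is the joint-sparsity effect such
    regularized models rely on.
    """
    row_norms = [sum(abs(x) ** p for x in row) ** (1.0 / p) for row in W]
    return sum(r ** q for r in row_norms) ** (1.0 / q)
```

With p = 2 and q = 1 this reduces to the familiar L2,1 norm (sum of row Euclidean norms).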
2

Ramesh, Lekshmi. "Support Recovery from Linear Measurements: Tradeoffs in the Measurement-Constrained Regime." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5500.

Abstract:
In this thesis, we study problems under the theme of discovering joint sparsity structure in a set of high-dimensional data samples from linear measurements. Our primary focus is on the regime where the number of samples available can potentially be large, but we are constrained to access very few measurements per sample. This setting can be used to model high-dimensional estimation tasks in a distributed setting, where storing or communicating more measurements per sample can be expensive. We first study a basic problem in this setting -- that of support recovery from linear measurements. In this problem, a set of n samples in d-dimensional Euclidean space, each having a support of size k, is accessed through m linear measurements per sample. The goal is to recover the unknown support, given knowledge of the m-dimensional measurements and the corresponding measurement matrices. This problem, also sometimes referred to as variable selection or model selection, has been extensively studied in the signal processing and statistics literature, and finds applications in source localization, hyperspectral imaging, heavy hitters detection in networks, and feature selection in regression. It is known that if we have m = Ω(k log d) measurements per sample, then even a single sample is sufficient for support recovery. As such, when we have access to multiple samples, an interesting question is whether we can perform recovery with fewer than k measurements per sample. This measurement-constrained setting is relatively less explored in the literature, and the optimal sample-measurement tradeoff was unknown prior to our work. We provide a tight characterization of the sample complexity of this problem, which together with previous results in the literature gives a full understanding of the scaling laws of this problem for different values of the ratio k/m.
We propose two algorithms that can perform recovery in the measurement-constrained regime, where standard algorithms fail to work. Our first algorithm is a simple closed-form variance estimation-based procedure, while our second algorithm is based on an approximate maximum likelihood procedure. We show that when m
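As a toy illustration of the measurement-constrained regime (m below k), the statistic below averages the squared back-projections (Aᵀy)² across samples; it concentrates on the shared support even though no single sample is individually recoverable. This is only in the spirit of the thesis's variance-estimation idea, not its actual estimator:

```python
import random

def support_scores(observations, d):
    """observations: list of (A, y) pairs, with A an m x d Gaussian matrix
    and y = A x the m measurements of one sample. Returns, per coordinate
    j, the average of (A^T y)_j ** 2; in expectation this is larger on the
    shared support, even when m is smaller than the sparsity k."""
    scores = [0.0] * d
    for A, y in observations:
        for j in range(d):
            z = sum(A[t][j] * y[t] for t in range(len(y)))
            scores[j] += z * z
    return [s / len(observations) for s in scores]

# Demo: d = 8 coordinates, shared support {1, 5} (so k = 2), but only
# m = 2 measurements per sample -- below the k log d single-sample bar.
rng = random.Random(0)
d, m, n, support = 8, 2, 4000, [1, 5]
observations = []
for _ in range(n):
    x = [0.0] * d
    for j in support:
        x[j] = rng.choice([-1.0, 1.0])   # random signs, unit magnitude
    A = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(m)]
    y = [sum(A[t][i] * x[i] for i in range(d)) for t in range(m)]
    observations.append((A, y))
scores = support_scores(observations, d)
top = sorted(range(d), key=lambda j: -scores[j])[:2]
```

Pooling n = 4000 such 2-measurement samples makes the two support coordinates the top-scoring ones, which is the sample-measurement tradeoff at work.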
3

"Joint Optimization of Quantization and Structured Sparsity for Compressed Deep Neural Networks." Master's thesis, 2018. http://hdl.handle.net/2286/R.I.50451.

Abstract:
Deep neural networks (DNN) have shown tremendous success in various cognitive tasks, such as image classification, speech recognition, etc. However, their usage on resource-constrained edge devices has been limited due to high computation and large memory requirement. To overcome these challenges, recent works have extensively investigated model compression techniques such as element-wise sparsity, structured sparsity and quantization. While most of these works have applied these compression techniques in isolation, there have been very few studies on application of quantization and structured sparsity together on a DNN model. This thesis co-optimizes structured sparsity and quantization constraints on DNN models during training. Specifically, it obtains optimal setting of 2-bit weight and 2-bit activation coupled with 4X structured compression by performing combined exploration of quantization and structured compression settings. The optimal DNN model achieves 50X weight memory reduction compared to floating-point uncompressed DNN. This memory saving is significant since applying only structured sparsity constraints achieves 2X memory savings and only quantization constraints achieves 16X memory savings. The algorithm has been validated on both high and low capacity DNNs and on wide-sparse and deep-sparse DNN models. Experiments demonstrated that deep-sparse DNN outperforms shallow-dense DNN with varying level of memory savings depending on DNN precision and sparsity levels. This work further proposed a Pareto-optimal approach to systematically extract optimal DNN models from a huge set of sparse and dense DNN models. The resulting 11 optimal designs were further evaluated by considering overall DNN memory which includes activation memory and weight memory. It was found that there is only a small change in the memory footprint of the optimal designs corresponding to the low sparsity DNNs. However, activation memory cannot be ignored for high sparsity DNNs.
Master's thesis, Computer Engineering, 2018.

Conference papers on the topic "Joint sparsity structure"

1

Tao, Shaozhe, Yifan Sun, and Daniel Boley. "Inverse Covariance Estimation with Structured Groups." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/395.

Abstract:
Estimating the inverse covariance matrix of p variables from n observations is challenging when n is much less than p, since the sample covariance matrix is singular and cannot be inverted. A popular solution is to optimize for the L1 penalized estimator; however, this does not incorporate structural domain knowledge and can be expensive to optimize. We consider finding inverse covariance matrices with group structure, defined as potentially overlapping principal submatrices, determined from domain knowledge (e.g. categories or graph cliques). We propose a new estimator for this problem setting that can be derived efficiently via the conditional gradient method, leveraging chordal decomposition theory for scalability. Simulation results show significant improvement in sample complexity when the correct group structure is known. We also apply these estimators to 14,910 stock closing prices, with noticeable improvement when group sparsity is exploited.
2

Li, Chen, Xutan Peng, Hao Peng, Jianxin Li, and Lihong Wang. "TextGTL: Graph-based Transductive Learning for Semi-supervised Text Classification via Structure-Sensitive Interpolation." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/369.

Abstract:
Compared with traditional sequential learning models, graph-based neural networks exhibit excellent properties when encoding text, such as the capacity of capturing global and local information simultaneously. Especially in the semi-supervised scenario, propagating information along the edge can effectively alleviate the sparsity of labeled data. In this paper, beyond the existing architecture of heterogeneous word-document graphs, for the first time, we investigate how to construct lightweight non-heterogeneous graphs based on different linguistic information to better serve free text representation learning. Then, a novel semi-supervised framework for text classification that refines graph topology under theoretical guidance and shares information across different text graphs, namely Text-oriented Graph-based Transductive Learning (TextGTL), is proposed. TextGTL also performs attribute space interpolation based on dense substructure in graphs to predict low-entropy labels with high-quality feature nodes for data augmentation. To verify the effectiveness of TextGTL, we conduct extensive experiments on various benchmark datasets, observing significant performance gains over conventional heterogeneous graphs. In addition, we also design ablation studies to dive deep into the validity of components in TextGTL.
3

Sun, Fangzheng, Yang Liu, and Hao Sun. "Physics-informed Spline Learning for Nonlinear Dynamics Discovery." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/283.

Abstract:
Dynamical systems are typically governed by a set of linear/nonlinear differential equations. Distilling the analytical form of these equations from very limited data remains intractable in many disciplines such as physics, biology, climate science, engineering and social science. To address this fundamental challenge, we propose a novel Physics-informed Spline Learning (PiSL) framework to discover parsimonious governing equations for nonlinear dynamics, based on sparsely sampled noisy data. The key concept is to (1) leverage splines to interpolate locally the dynamics, perform analytical differentiation and build the library of candidate terms, (2) employ sparse representation of the governing equations, and (3) use the physics residual in turn to inform the spline learning. The synergy between splines and discovered underlying physics leads to the robust capacity of dealing with high-level data scarcity and noise. A hybrid sparsity-promoting alternating direction optimization strategy is developed for systematically pruning the sparse coefficients that form the structure and explicit expression of the governing equations. The efficacy and superiority of the proposed method have been demonstrated by multiple well-known nonlinear dynamical systems, in comparison with two state-of-the-art methods.
4

Xu, Jie, Cheng Deng, Xinbo Gao, Dinggang Shen, and Heng Huang. "Predicting Alzheimer's Disease Cognitive Assessment via Robust Low-Rank Structured Sparse Model." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/542.

Abstract:
Alzheimer's disease (AD) is a neurodegenerative disorder with slow onset, which can result in persistent neurological dysfunction. How to identify informative longitudinal phenotypic neuroimaging markers and predict cognitive measures is crucial to recognizing AD at an early stage. Many existing models relate imaging measures to cognitive status using regression, but they do not take full account of the interactions between cognitive scores. In this paper, we propose a robust low-rank structured sparse regression method (RLSR) to address this issue. The proposed model simultaneously selects effective features and learns the underlying structure between cognitive scores by utilizing novel mixed structured sparsity-inducing norms and low-rank approximation. In addition, an efficient algorithm is derived to solve the proposed non-smooth objective function with proven convergence. Empirical studies on cognitive data of the ADNI cohort demonstrate the superior performance of the proposed method.
5

Grüttemeier, Niels, and Christian Komusiewicz. "Learning Bayesian Networks Under Sparsity Constraints: A Parameterized Complexity Analysis." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/586.

Abstract:
We study the problem of learning the structure of an optimal Bayesian network when additional structural constraints are posed on the network or on its moralized graph. More precisely, we consider the constraint that the moralized graph can be transformed to a graph from a sparse graph class Π by at most k vertex deletions. We show that for Π being the graphs with maximum degree 1, an optimal network can be computed in polynomial time when k is constant, extending previous work that gave an algorithm with such a running time for Π being the class of edgeless graphs [Korhonen & Parviainen, NIPS 2015]. We then show that further extensions or improvements are presumably impossible. For example, we show that when Π is the set of graphs in which each component has size at most three, then learning an optimal network is NP-hard even if k=0. Finally, we show that learning an optimal network with at most k edges in the moralized graph presumably is not fixed-parameter tractable with respect to k and that, in contrast, computing an optimal network with at most k arcs is fixed-parameter tractable in k.
6

Oliveira, Saullo H. G., André R. Gonçalves, and Fernando J. Von Zuben. "Group LASSO with Asymmetric Structure Estimation for Multi-Task Learning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/444.

Full text
Abstract:
Group LASSO is a widely used regularization that imposes sparsity over groups of covariates. When used in Multi-Task Learning (MTL) formulations, it makes the underlying assumption that if a group of covariates is not relevant for one or a few tasks, it is not relevant for any task, thus implicitly assuming that all tasks are related. This can easily lead to negative transfer when the assumption does not hold. Since for most practical applications we hardly know a priori how the tasks are related, several approaches have been conceived in the literature to (i) properly capture the transference structure, (ii) improve interpretability of the tasks' interplay, and (iii) penalize potential negative transfer. Recently, automatic estimation of asymmetric structures inside the learning process has proven capable of effectively avoiding negative transfer. Our proposal is the first attempt in the literature to conceive a Group LASSO formulation with asymmetric transference, seeking the best of both worlds in a framework that admits overlapping groups. The resulting optimization problem is solved by an alternating procedure with fast methods. We performed experiments on synthetic and real datasets to compare our proposal with state-of-the-art approaches, evidencing its promising predictive performance and distinguished interpretability. The real case study involves the prediction of cognitive scores for Alzheimer's disease progression assessment. The source code is available on GitHub.
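For context, the basic building block of any Group LASSO method is the group-wise shrinkage (proximal) operator, which either scales a whole group of coefficients down or zeroes it out entirely. A minimal sketch assuming non-overlapping groups given as index lists (function and parameter names are illustrative; the paper itself additionally handles overlapping groups and asymmetric task transfer):

```python
import numpy as np

def group_soft_threshold(w, groups, t):
    """Proximal operator of t * sum_g ||w[g]||_2 over non-overlapping groups."""
    w = np.asarray(w, dtype=float).copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm <= t:
            w[g] = 0.0               # whole group eliminated: group-level sparsity
        else:
            w[g] *= 1.0 - t / norm   # otherwise shrink the group toward zero
    return w
```

The all-or-nothing behavior per group is exactly what makes the "one group irrelevant for all tasks" assumption bite in an MTL setting.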
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Zhangyang, Shuai Huang, Jiayu Zhou, and Thomas S. Huang. "Doubly Sparsifying Network." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/421.

Full text
Abstract:
We propose the doubly sparsifying network (DSN), drawing inspiration from the double sparsity model for dictionary learning. DSN emphasizes the joint utilization of both the problem structure and the parameter structure. It simultaneously sparsifies the output features and the learned model parameters under one unified framework. DSN enjoys intuitive model interpretation, compact model size, and low complexity. We compare DSN against a few carefully designed baselines to verify its consistently superior performance in a wide range of settings. Encouraged by its robustness to insufficient training data, we explore the applicability of DSN in brain signal processing, a challenging interdisciplinary area. DSN is evaluated on two mainstream tasks, electroencephalographic (EEG) signal classification and blood oxygenation level dependent (BOLD) response prediction, achieving promising results on both.
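The "double sparsity" idea — sparse activations plus sparse parameters — can be caricatured with two elementwise operations: a shrinkage activation that zeroes small feature responses, and magnitude pruning that zeroes small weights. A hypothetical numpy sketch (not the DSN architecture itself; the names and the pruning ratio are invented for illustration):

```python
import numpy as np

def soft_threshold(x, theta):
    """Feature sparsification: shrinkage activation that zeroes small responses."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def prune(W, ratio):
    """Parameter sparsification: zero out the smallest-magnitude fraction of weights."""
    k = int(ratio * W.size)
    if k == 0:
        return W
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) > thresh, W, 0.0)
```

Applying the first after each linear map and the second to the weight matrices gives a layer that is sparse in both its outputs and its parameters, which is the joint structure the abstract emphasizes.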
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Yanchi, Tan Yan, and Haifeng Chen. "Exploiting Graph Regularized Multi-dimensional Hawkes Processes for Modeling Events with Spatio-temporal Characteristics." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/343.

Full text
Abstract:
Multi-dimensional Hawkes processes (MHP) have been widely used for modeling temporal events. However, when MHP are applied to events with spatio-temporal characteristics, the spatial information is often ignored despite its importance. In this paper, we introduce a framework that exploits MHP for modeling spatio-temporal events by considering both temporal and spatial information. Specifically, we design a graph regularization method to effectively integrate the prior spatial structure into MHP for learning the influence matrix between different locations. The prior spatial structure is first represented as a connection graph. Then, a multi-view method is used to align the prior connection graph with the influence matrix while preserving the sparsity and low-rank properties of the kernel matrix. Moreover, we develop an optimization scheme based on the alternating direction method of multipliers to solve the resulting problem. Finally, the experimental results show that, with the prior connection graph introduced for regularization, we can learn the interaction patterns between different geographical areas more effectively.
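The object being regularized here is the influence matrix of a multi-dimensional Hawkes process, which enters through the conditional intensity. A toy sketch of that intensity with an exponential kernel (generic MHP machinery, not the paper's graph-regularized learner; `mu`, `A`, and `beta` are assumed names for the base rates, influence matrix, and decay):

```python
import numpy as np

def intensity(t, events, mu, A, beta):
    """lambda_i(t) = mu_i + sum over past events (t_j, d_j) of
    A[i, d_j] * beta * exp(-beta * (t - t_j))."""
    lam = np.asarray(mu, dtype=float).copy()
    for t_j, d_j in events:
        if t_j < t:  # only strictly past events excite the process
            lam += A[:, d_j] * beta * np.exp(-beta * (t - t_j))
    return lam
```

Entry `A[i, j]` measures how strongly an event in dimension (location) `j` excites future events in dimension `i` — this is the matrix whose sparsity and low-rank structure the graph regularization is meant to preserve.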
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Zihan, Zhaochun Ren, Chunyu He, Peng Zhang, and Yue Hu. "Robust Embedding with Multi-Level Structures for Link Prediction." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/728.

Full text
Abstract:
Knowledge Graph (KG) embedding has become crucial for the task of link prediction. Recent work applies encoder-decoder models to tackle this problem, where the encoder is a graph neural network (GNN) and the decoder is an embedding method; these approaches enrich embedding techniques with structure information. Unfortunately, existing GNN-based frameworks still confront three severe problems: low representational power, flat stacking, and poor robustness to noise. In this work, we propose a novel multi-level graph neural network (M-GNN) to address the above challenges. We first identify an injective aggregation scheme and design a powerful GNN layer using multi-layer perceptrons (MLPs). Then, we define graph coarsening schemes for various kinds of relations and stack GNN layers on a series of coarsened graphs, so as to model hierarchical structures. Furthermore, attention mechanisms are adopted so that our approach can make accurate predictions even on a noisy knowledge graph. Results on the WN18 and FB15k datasets show that our approach is effective in the standard link prediction task, significantly and consistently outperforming competitive baselines. Furthermore, robustness analysis on the FB15k-237 dataset demonstrates that the proposed M-GNN is highly robust to sparsity and noise.
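The "injective aggregation scheme with MLPs" described above is in the spirit of GIN-style sum aggregation, where summing neighbor features (rather than averaging or max-pooling) keeps the update injective on multisets of features. A schematic layer assuming a dense adjacency matrix and a two-layer ReLU MLP (weight names and `eps` are placeholders for the sketch, not the paper's actual M-GNN):

```python
import numpy as np

def gin_layer(H, adj, W1, b1, W2, b2, eps=0.0):
    """Injective sum aggregation over neighbors, followed by a 2-layer MLP."""
    agg = (1.0 + eps) * H + adj @ H           # weighted self term + neighbor sum
    hidden = np.maximum(agg @ W1 + b1, 0.0)   # ReLU
    return hidden @ W2 + b2
```

Stacking such layers on progressively coarsened graphs, as the abstract describes, is what lets the model capture hierarchical structure instead of stacking "in a flat way" on one fixed graph.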
APA, Harvard, Vancouver, ISO, and other styles
10

Zhong, Wanjun, Junjie Huang, Qian Liu, Ming Zhou, Jiahai Wang, Jian Yin, and Nan Duan. "Reasoning over Hybrid Chain for Table-and-Text Open Domain Question Answering." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/629.

Full text
Abstract:
Tabular and textual question answering requires systems to perform reasoning over heterogeneous information, considering table structure and the connections between table and text. In this paper, we propose a ChAin-centric Reasoning and Pre-training framework (CARP). CARP utilizes a hybrid chain to model the explicit intermediate reasoning process across table and text for question answering. We also propose a novel chain-centric pre-training method to enhance the pre-trained model in identifying the cross-modality reasoning process and alleviating the data sparsity problem. This method constructs a large-scale reasoning corpus by synthesizing pseudo heterogeneous reasoning paths from Wikipedia and generating corresponding questions. We evaluate our system on OTT-QA, a large-scale table-and-text open-domain question answering benchmark, and our system achieves state-of-the-art performance. Further analyses illustrate that the explicit hybrid chain offers substantial performance improvement and interpretability of the intermediate reasoning process, and that the chain-centric pre-training boosts performance on chain extraction.
APA, Harvard, Vancouver, ISO, and other styles