Journal articles on the topic "Low-Rank Tensor"

Consult the 50 best journal articles on the topic "Low-Rank Tensor".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online whenever the relevant parameters are available in the work's metadata.

Browse journal articles from many different disciplines and compile your bibliography correctly.

1

Zhong, Guoqiang, and Mohamed Cheriet. "Large Margin Low Rank Tensor Analysis". Neural Computation 26, no. 4 (April 2014): 761–80. http://dx.doi.org/10.1162/neco_a_00570.

Abstract:
We present a supervised model for tensor dimensionality reduction, which is called large margin low rank tensor analysis (LMLRTA). In contrast to traditional vector representation-based dimensionality reduction methods, LMLRTA can take any order of tensors as input. And unlike previous tensor dimensionality reduction methods, which can learn only the low-dimensional embeddings with a priori specified dimensionality, LMLRTA can automatically and jointly learn the dimensionality and the low-dimensional representations from data. Moreover, LMLRTA delivers low rank projection matrices, while it encourages data of the same class to be close and of different classes to be separated by a large margin of distance in the low-dimensional tensor space. LMLRTA can be optimized using an iterative fixed-point continuation algorithm, which is guaranteed to converge to a local optimal solution of the optimization problem. We evaluate LMLRTA on an object recognition application, where the data are represented as 2D tensors, and a face recognition application, where the data are represented as 3D tensors. Experimental results show the superiority of LMLRTA over state-of-the-art approaches.
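
A structural sketch of the optimizer family the abstract invokes may help: fixed-point continuation alternates a gradient step on the smooth loss with a singular value thresholding prox that pushes the projection matrices toward low rank. The function names and the interface here are ours; the large-margin loss itself is defined in the paper and only stubbed out below.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def fixed_point_continuation(grad_loss, U0, lam, step=1e-2, n_iter=200):
    """Generic fixed-point continuation for min_U loss(U) + lam * ||U||_*.

    `grad_loss` is the gradient of the smooth part (for LMLRTA, a
    large-margin loss over projected tensors, omitted here); each
    iteration combines a gradient step with the SVT prox, which is
    what drives the learned projection matrices toward low rank.
    """
    U = U0.copy()
    for _ in range(n_iter):
        U = svt(U - step * grad_loss(U), step * lam)
    return U
```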
2

Liu, Hongyi, Hanyang Li, Zebin Wu, and Zhihui Wei. "Hyperspectral Image Recovery Using Non-Convex Low-Rank Tensor Approximation". Remote Sensing 12, no. 14 (July 15, 2020): 2264. http://dx.doi.org/10.3390/rs12142264.

Abstract:
Low-rank tensors have received more attention in hyperspectral image (HSI) recovery. Minimizing the tensor nuclear norm, as a low-rank approximation method, often leads to modeling bias. To achieve an unbiased approximation and improve the robustness, this paper develops a non-convex relaxation approach for low-rank tensor approximation. Firstly, a non-convex approximation of tensor nuclear norm (NCTNN) is introduced to the low-rank tensor completion. Secondly, a non-convex tensor robust principal component analysis (NCTRPCA) method is proposed, which aims at exactly recovering a low-rank tensor corrupted by mixed-noise. The two proposed models are solved efficiently by the alternating direction method of multipliers (ADMM). Three HSI datasets are employed to exhibit the superiority of the proposed model over the low rank penalization method in terms of accuracy and robustness.
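
To make the convex/non-convex contrast concrete, the sketch below evaluates the tensor nuclear norm and one plausible non-convex surrogate on the Fourier-domain frontal slices used by the t-SVD; the log-based penalty is our illustrative choice, not necessarily the paper's exact relaxation.

```python
import numpy as np

def tnn_and_log_surrogate(X, eps=1e-2):
    """Evaluate the convex tensor nuclear norm (TNN) and a log-based
    non-convex surrogate on a 3-way tensor, using the singular values
    of the Fourier-domain frontal slices (the t-SVD construction).

    sum(log(1 + sigma/eps)) grows much more slowly than sigma for
    large singular values, which is how non-convex penalties reduce
    the modeling bias of nuclear-norm minimization.
    """
    n3 = X.shape[2]
    Xf = np.fft.fft(X, axis=2)
    tnn, surrogate = 0.0, 0.0
    for k in range(n3):
        s = np.linalg.svd(Xf[:, :, k], compute_uv=False)
        tnn += s.sum() / n3                  # convex TNN contribution
        surrogate += np.log1p(s / eps).sum() # illustrative non-convex penalty
    return tnn, surrogate
```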
3

Zhou, Pan, Canyi Lu, Zhouchen Lin, and Chao Zhang. "Tensor Factorization for Low-Rank Tensor Completion". IEEE Transactions on Image Processing 27, no. 3 (March 2018): 1152–63. http://dx.doi.org/10.1109/tip.2017.2762595.

4

He, Yicong, and George K. Atia. "Multi-Mode Tensor Space Clustering Based on Low-Tensor-Rank Representation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6893–901. http://dx.doi.org/10.1609/aaai.v36i6.20646.

Abstract:
Traditional subspace clustering aims to cluster data lying in a union of linear subspaces. The vectorization of high-dimensional data to 1-D vectors to perform clustering ignores much of the structure intrinsic to such data. To preserve said structure, in this work we exploit clustering in a high-order tensor space rather than a vector space. We develop a novel low-tensor-rank representation (LTRR) for unfolded matrices of tensor data lying in a low-rank tensor space. The representation coefficient matrix of an unfolding matrix is tensorized to a 3-order tensor, and the low-tensor-rank constraint is imposed on the transformed coefficient tensor to exploit the self-expressiveness property. Then, inspired by the multi-view clustering framework, we develop a multi-mode tensor space clustering algorithm (MMTSC) that can deal with tensor space clustering with or without missing entries. The tensor is unfolded along each mode, and the coefficient matrices are obtained for each unfolded matrix. The low tensor rank constraint is imposed on a tensor combined from transformed coefficient tensors of each mode, such that the proposed method can simultaneously capture the low rank property for the data within each tensor space and maintain cluster consistency across different modes. Experimental results demonstrate that the proposed MMTSC algorithm can outperform existing clustering algorithms in many cases.
5

Liu, Xiaohua, and Guijin Tang. "Color Image Restoration Using Sub-Image Based Low-Rank Tensor Completion". Sensors 23, no. 3 (February 3, 2023): 1706. http://dx.doi.org/10.3390/s23031706.

Abstract:
Many restoration methods use the low-rank constraint of high-dimensional image signals to recover corrupted images. These signals are usually represented by tensors, which can maintain their inherent relevance. The image in this simple tensor representation has a certain low-rank property, but not a strong one. In order to enhance the low-rank property, we propose a novel method called sub-image based low-rank tensor completion (SLRTC) for image restoration. We first sample a color image to obtain sub-images, and adopt these sub-images instead of the original single image to form a tensor. Then we conduct a mode permutation on this tensor. Next, we exploit the tensor nuclear norm defined via the tensor singular value decomposition (t-SVD) to build the low-rank completion model. Finally, we perform tensor singular value thresholding (t-SVT) based on the standard alternating direction method of multipliers (ADMM) algorithm to solve the aforementioned model. Experimental results have shown that, compared with state-of-the-art tensor completion techniques, the proposed method can provide superior results in terms of objective and subjective assessment.
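
The t-SVT operator named in the abstract has a compact generic form under the discrete Fourier transform; a minimal NumPy sketch (not the authors' implementation):

```python
import numpy as np

def t_svt(X, tau):
    """Tensor singular value thresholding (t-SVT) under the t-SVD:
    FFT along the third mode, soft-threshold the singular values of
    every frontal slice in the Fourier domain, then transform back.
    `tau` is the soft-threshold level.
    """
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)                 # frontal slices in Fourier domain
    Yf = np.empty_like(Xf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # soft-thresholding
        Yf[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(Yf, axis=2))
```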
6

Jiang, Yuanxiang, Qixiang Zhang, Zhanjiang Yuan, and Chen Wang. "Convex Robust Recovery of Corrupted Tensors via Tensor Singular Value Decomposition and Local Low-Rank Approximation". Journal of Physics: Conference Series 2670, no. 1 (December 1, 2023): 012026. http://dx.doi.org/10.1088/1742-6596/2670/1/012026.

Abstract:
This paper discusses the recovery of tensor data corrupted by random noise. Our approach assumes that the underlying structure of the data is a linear combination of several low-rank tensor subspaces. The goal is to recover these local low-rank tensors exactly and remove as much random noise as possible. A non-parametric kernel smoothing technique is employed to establish an effective mathematical notion of local models. After that, each local model can be robustly separated into a low-rank tensor and a sparse tensor. The low-rank tensor can be recovered by minimizing a weighted combination of the norm and the tensor nuclear norm (TNN) obtained as the tightest convex relaxation of the tensor multi-linear rank defined in the Tensor Singular Value Decomposition (TSVD). Numerical simulation experiments verify that our proposed approach can effectively denoise tensor data, such as color images with random noise, and has superior performance compared to existing methods.
7

Yu, Shicheng, Jiaqing Miao, Guibing Li, Weidong Jin, Gaoping Li, and Xiaoguang Liu. "Tensor Completion via Smooth Rank Function Low-Rank Approximate Regularization". Remote Sensing 15, no. 15 (August 3, 2023): 3862. http://dx.doi.org/10.3390/rs15153862.

Abstract:
In recent years, the tensor completion algorithm has played a vital part in the reconstruction of missing elements within high-dimensional remote sensing image data. Due to the difficulty of tensor rank computation, scholars have proposed many substitutions of tensor rank. By introducing the smooth rank function (SRF), this paper proposes a new tensor rank nonconvex substitution function that performs adaptive weighting on different singular values to avoid the performance deficiency caused by the equal treatment of all singular values. On this basis, a novel tensor completion model that minimizes the SRF as the objective function is proposed. The proposed model is efficiently solved by adding the hot start method to the alternating direction multiplier method (ADMM) framework. Extensive experiments are carried out in this paper to demonstrate the resilience of the proposed model to missing data. The results illustrate that the proposed model is superior to other advanced models in tensor completeness.
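
The paper defines its own SRF; purely as an illustration of replacing rank with a smooth, adaptively weighted function of the singular values, one common surrogate can be evaluated on a mode unfolding as follows (the Gaussian-type form is our assumption, not the paper's definition):

```python
import numpy as np

def smooth_rank_surrogate(X, mode, delta=1.0):
    """Evaluate a smooth rank surrogate on one mode unfolding of X.

    Illustrative form: sum_i (1 - exp(-sigma_i**2 / delta**2)).  It
    tends to the true rank as delta -> 0 and implicitly penalizes
    small singular values more aggressively than large ones, i.e.
    adaptive weighting of the kind the abstract describes.
    """
    Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)  # mode unfolding
    s = np.linalg.svd(Xm, compute_uv=False)
    return float(np.sum(1.0 - np.exp(-(s ** 2) / delta ** 2)))
```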
8

Nie, Jiawang. "Low Rank Symmetric Tensor Approximations". SIAM Journal on Matrix Analysis and Applications 38, no. 4 (January 2017): 1517–40. http://dx.doi.org/10.1137/16m1107528.

9

Mickelin, Oscar, and Sertac Karaman. "Multiresolution Low-rank Tensor Formats". SIAM Journal on Matrix Analysis and Applications 41, no. 3 (January 2020): 1086–114. http://dx.doi.org/10.1137/19m1284579.

10

Gong, Xiao, Wei Chen, Jie Chen, and Bo Ai. "Tensor Denoising Using Low-Rank Tensor Train Decomposition". IEEE Signal Processing Letters 27 (2020): 1685–89. http://dx.doi.org/10.1109/lsp.2020.3025038.

11

Chen, Xi'ai, Zhen Wang, Kaidong Wang, Huidi Jia, Zhi Han, and Yandong Tang. "Multi-Dimensional Low-Rank with Weighted Schatten p-Norm Minimization for Hyperspectral Anomaly Detection". Remote Sensing 16, no. 1 (December 24, 2023): 74. http://dx.doi.org/10.3390/rs16010074.

Abstract:
Hyperspectral anomaly detection is an important unsupervised binary classification problem that aims to effectively distinguish between background and anomalies in hyperspectral images (HSIs). In recent years, methods based on low-rank tensor representations have been proposed to decompose HSIs into low-rank background and sparse anomaly tensors. However, current methods neglect the low-rank information in the spatial dimension and rely heavily on the background information contained in the dictionary. Furthermore, these algorithms show limited robustness when the dictionary information is missing or corrupted by high-level noise. To address these problems, we propose a novel method called multi-dimensional low-rank (MDLR) for HSI anomaly detection. It first reconstructs three background tensors separately from three directional slices of the background tensor. Then, weighted Schatten p-norm minimization is employed to enforce the low-rank constraint on the background tensor, and LF,1-norm regularization is used to describe the sparsity in the anomaly tensor. Finally, a well-designed alternating direction method of multipliers (ADMM) is employed to effectively solve the optimization problem. Extensive experiments on four real-world datasets show that our approach outperforms existing anomaly detection methods in terms of accuracy.
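
For reference, the weighted Schatten p-norm regularizer (raised to the p-th power, as it is usually minimized) is straightforward to evaluate on a matrix unfolding; the weights and the choice p = 0.5 below are illustrative:

```python
import numpy as np

def weighted_schatten_p(M, weights, p=0.5):
    """Weighted Schatten p-norm (to the p-th power) of a matrix:
    sum_i w_i * sigma_i**p.

    Smaller p pushes the penalty closer to rank; per-value weights let
    large, informative singular values be penalized less.  `weights`
    is assumed to supply at least as many entries as M has singular
    values.
    """
    s = np.linalg.svd(M, compute_uv=False)
    w = np.asarray(weights)[: len(s)]
    return float(np.sum(w * s ** p))
```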
12

Sobolev, Konstantin, Dmitry Ermilov, Anh-Huy Phan, and Andrzej Cichocki. "PARS: Proxy-Based Automatic Rank Selection for Neural Network Compression via Low-Rank Weight Approximation". Mathematics 10, no. 20 (October 14, 2022): 3801. http://dx.doi.org/10.3390/math10203801.

Abstract:
Low-rank matrix/tensor decompositions are promising methods for reducing the inference time, computation, and memory consumption of deep neural networks (DNNs). This group of methods decomposes the pre-trained neural network weights through low-rank matrix/tensor decomposition and replaces the original layers with lightweight factorized layers. A main drawback of the technique is that it demands a great amount of time and effort to select the best ranks of tensor decomposition for each layer in a DNN. This paper proposes a Proxy-based Automatic tensor Rank Selection method (PARS) that utilizes a Bayesian optimization approach to find the best combination of ranks for neural network (NN) compression. We observe that the decomposition of weight tensors adversely influences the feature distribution inside the neural network and impairs the predictability of the post-compression DNN performance. Based on this finding, a novel proxy metric is proposed to deal with the abovementioned issue and to increase the quality of the rank search procedure. Experimental results show that PARS improves the results of existing decomposition methods on several representative NNs, including ResNet-18, ResNet-56, VGG-16, and AlexNet. We obtain a 3× FLOP reduction with almost no loss of accuracy for ILSVRC-2012 ResNet-18 and a 5.5× FLOP reduction with an accuracy improvement for ILSVRC-2012 VGG-16.
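
The per-layer building block that any such rank-selection scheme searches over is a truncated-SVD factorization of a layer's weight; the sketch below shows only that step, leaving out the paper's Bayesian optimization loop and proxy metric:

```python
import numpy as np

def factorize_dense(W, rank):
    """Replace a dense layer weight W (out_dim x in_dim) with two
    factors A (out_dim x rank) and B (rank x in_dim) via truncated SVD,
    so the layer computes A @ (B @ x) instead of W @ x.

    Parameter count drops from out_dim*in_dim to rank*(out_dim + in_dim);
    rank-selection methods such as PARS search over `rank` per layer.
    """
    U, s, Vh = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into A
    B = Vh[:rank]
    return A, B
```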
13

Sun, Li, and Bing Song. "Data Recovery Technology Based on Subspace Clustering". Scientific Programming 2022 (July 20, 2022): 1–6. http://dx.doi.org/10.1155/2022/1920933.

Abstract:
High-dimensional data usually lie, asymptotically, in a low-dimensional space. In this study, we mainly use the tensor t-product as a tool to propose new algorithms for data clustering and recovery and verify them on classical data sets. This study defines the "singular values" of tensors, adopts a weighting strategy for these singular values, and proposes a tensor weighted kernel norm minimization robust principal component analysis method, which is used to restore low-rank third-order tensor data corrupted with low probability. Experiments on synthetic data show that, in the recovery of strictly low-rank data, the tensor method and the weighting strategy can obtain more accurate recovery even when the rank is relatively large, extending the range of ranks that can be handled. The proposed method combines the two and demonstrates its superiority through the restoration of 500 images under a small-probability noise level.
14

Bachmayr, Markus, and Vladimir Kazeev. "Stability of Low-Rank Tensor Representations and Structured Multilevel Preconditioning for Elliptic PDEs". Foundations of Computational Mathematics 20, no. 5 (January 23, 2020): 1175–236. http://dx.doi.org/10.1007/s10208-020-09446-z.

Abstract:
Folding grid value vectors of size $2^L$ into $L$th-order tensors of mode size $2\times \cdots \times 2$, combined with low-rank representation in the tensor train format, has been shown to result in highly efficient approximations for various classes of functions. These include solutions of elliptic PDEs on nonsmooth domains or with oscillatory data. This tensor-structured approach is attractive because it leads to highly compressed, adaptive approximations based on simple discretizations. Standard choices of the underlying bases, such as piecewise multilinear finite elements on uniform tensor product grids, entail the well-known matrix ill-conditioning of discrete operators. We demonstrate that, for low-rank representations, the use of tensor structure itself additionally introduces representation ill-conditioning, a new effect specific to computations in tensor networks. We analyze the tensor structure of a BPX preconditioner for a second-order linear elliptic operator and construct an explicit tensor-structured representation of the preconditioner, with ranks independent of the number $L$ of discretization levels. The straightforward application of the preconditioner yields discrete operators whose matrix conditioning is uniform with respect to the discretization parameter, but in decompositions that suffer from representation ill-conditioning. By additionally eliminating certain redundancies in the representations of the preconditioned discrete operators, we obtain reduced-rank decompositions that are free of both matrix and representation ill-conditioning. For an iterative solver based on soft thresholding of low-rank tensors, we obtain convergence and complexity estimates and demonstrate its reliability and efficiency for discretizations with up to $2^{50}$ nodes in each dimension.
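
A minimal sketch of the folding-plus-compression step the abstract begins with: fold a vector of $2^L$ grid values into an $L$th-order $2\times \cdots \times 2$ tensor and compress it with the textbook TT-SVD (a fixed maximum rank stands in for the careful error control used in practice):

```python
import numpy as np

def qtt_compress(vec, L, max_rank=8):
    """Fold a length-2**L vector of grid values into an L-th order
    2 x ... x 2 tensor and compress it in the tensor train format by
    sequential truncated SVDs (textbook TT-SVD; the crude rank cap
    here replaces proper error-controlled truncation).
    """
    vec = np.asarray(vec, dtype=float)
    assert vec.size == 2 ** L
    cores, r = [], 1
    rest = vec
    for _ in range(L - 1):
        mat = rest.reshape(r * 2, -1)              # split off the next mode
        U, s, Vh = np.linalg.svd(mat, full_matrices=False)
        rk = max(1, int(min(max_rank, (s > 1e-12 * s[0]).sum())))
        cores.append(U[:, :rk].reshape(r, 2, rk))  # TT core for this level
        rest = s[:rk, None] * Vh[:rk]              # carry the remainder
        r = rk
    cores.append(rest.reshape(r, 2, 1))            # last core
    return cores
```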
15

Shcherbakova, Elena M., Sergey A. Matveev, Alexander P. Smirnov, and Eugene E. Tyrtyshnikov. "Study of performance of low-rank nonnegative tensor factorization methods". Russian Journal of Numerical Analysis and Mathematical Modelling 38, no. 4 (August 1, 2023): 231–39. http://dx.doi.org/10.1515/rnam-2023-0018.

Abstract:
In the present paper we compare two different iterative approaches to constructing nonnegative tensor train and Tucker decompositions. The first approach is based on the idea of alternating projections and randomized sketching for the factorization of tensors with nonnegative elements; it can be useful for both the TT and Tucker formats. The second approach consists of two stages. In the first stage we find the unconstrained tensor train decomposition of the target array. In the second stage we use this initial approximation and refine it within a moderate number of operations to obtain a factorization with nonnegative factors in either the tensor train or the Tucker model. We study the performance of these methods on both synthetic data and a hyperspectral image and demonstrate the clear advantage of the latter technique in terms of computational time and its wider range of possible applications.
16

Du, Shiqiang, Yuqing Shi, Guangrong Shan, Weilan Wang, and Yide Ma. "Tensor low-rank sparse representation for tensor subspace learning". Neurocomputing 440 (June 2021): 351–64. http://dx.doi.org/10.1016/j.neucom.2021.02.002.

17

Cai, Bing, and Gui-Fu Lu. "Tensor subspace clustering using consensus tensor low-rank representation". Information Sciences 609 (September 2022): 46–59. http://dx.doi.org/10.1016/j.ins.2022.07.049.

18

李, 鸿燕. "Double Factor Tensor Norm Regularized Low Rank Tensor Completion". Advances in Applied Mathematics 11, no. 10 (2022): 6908–14. http://dx.doi.org/10.12677/aam.2022.1110732.

19

Jiang, Bo, Shiqian Ma, and Shuzhong Zhang. "Low-M-Rank Tensor Completion and Robust Tensor PCA". IEEE Journal of Selected Topics in Signal Processing 12, no. 6 (December 2018): 1390–404. http://dx.doi.org/10.1109/jstsp.2018.2873144.

20

Zheng, Yu-Bang, Ting-Zhu Huang, Xi-Le Zhao, Tai-Xiang Jiang, Teng-Yu Ji, and Tian-Hui Ma. "Tensor N-tubal rank and its convex relaxation for low-rank tensor recovery". Information Sciences 532 (September 2020): 170–89. http://dx.doi.org/10.1016/j.ins.2020.05.005.

21

马, 婷婷. "Enhanced Low Rank Tensor Approximation Algorithm". Advances in Applied Mathematics 08, no. 08 (2019): 1336–40. http://dx.doi.org/10.12677/aam.2019.88157.

22

Huang, Huyan, Yipeng Liu, Zhen Long, and Ce Zhu. "Robust Low-Rank Tensor Ring Completion". IEEE Transactions on Computational Imaging 6 (2020): 1117–26. http://dx.doi.org/10.1109/tci.2020.3006718.

23

Zhang, Anru. "Cross: Efficient low-rank tensor completion". Annals of Statistics 47, no. 2 (April 2019): 936–64. http://dx.doi.org/10.1214/18-aos1694.

24

Su, Yaru, Xiaohui Wu, and Genggeng Liu. "Nonconvex Low Tubal Rank Tensor Minimization". IEEE Access 7 (2019): 170831–43. http://dx.doi.org/10.1109/access.2019.2956115.

25

Wang, Andong, Zhihui Lai, and Zhong Jin. "Noisy low-tubal-rank tensor completion". Neurocomputing 330 (February 2019): 267–79. http://dx.doi.org/10.1016/j.neucom.2018.11.012.

26

Guo, Kailing, Tong Zhang, Xiangmin Xu, and Xiaofen Xing. "Low-Rank Tensor Thresholding Ridge Regression". IEEE Access 7 (2019): 153761–72. http://dx.doi.org/10.1109/access.2019.2944426.

27

Tan, Huachun, Jianshuai Feng, Zhengdong Chen, Fan Yang, and Wuhong Wang. "Low Multilinear Rank Approximation of Tensors and Application in Missing Traffic Data". Advances in Mechanical Engineering 6 (January 1, 2014): 157597. http://dx.doi.org/10.1155/2014/157597.

Abstract:
The problem of missing data in multiway arrays (i.e., tensors) is common in many fields such as bibliographic data analysis, image processing, and computer vision. We consider the problems of approximating a tensor by another tensor with low multilinear rank in the presence of missing data and possibly reconstructing it (i.e., tensor completion). In this paper, we propose a weighted Tucker model which models only the known elements for capturing the latent structure of the data and reconstructing the missing elements. To treat the nonuniqueness of the proposed weighted Tucker model, a novel gradient descent algorithm based on a Grassmann manifold, which is termed Tucker weighted optimization (Tucker-Wopt), is proposed for guaranteeing the global convergence to a local minimum of the problem. Based on extensive experiments, Tucker-Wopt is shown to successfully reconstruct tensors with noise and up to 95% missing data. Furthermore, the experiments on traffic flow volume data demonstrate the usefulness of our algorithm on real-world application.
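
The "model only the known elements" idea reduces to a masked Tucker objective; a sketch of evaluating it (the Grassmann-manifold gradient machinery of Tucker-Wopt is omitted, and the function names are ours):

```python
import numpy as np

def mode_product(T, M, mode):
    """Mode-n product: multiply matrix M onto mode `mode` of tensor T."""
    T = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, T, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def weighted_tucker_loss(X, W, G, factors):
    """Objective of a weighted Tucker model:
    0.5 * || W * (X - [[G; U1, ..., Ud]]) ||_F^2.

    Only entries with W == 1 (observed) contribute to the fit, which
    is the core idea behind Tucker-Wopt.  G is the core tensor and
    `factors` the list of factor matrices U_n of shape (I_n, r_n).
    """
    Xhat = G
    for n, Un in enumerate(factors):
        Xhat = mode_product(Xhat, Un, n)   # expand core along each mode
    return 0.5 * np.sum((W * (X - Xhat)) ** 2)
```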
28

Zhu, Yada, Jingrui He, and Rick Lawrence. "Hierarchical Modeling with Tensor Inputs". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1233–39. http://dx.doi.org/10.1609/aaai.v26i1.8283.

Abstract:
In many real applications, the input data are naturally expressed as tensors, such as virtual metrology in semiconductor manufacturing, face recognition and gait recognition in computer vision, etc. In this paper, we propose a general optimization framework for dealing with tensor inputs. Most existing methods for supervised tensor learning use only rank-one weight tensors in the linear model and cannot readily incorporate domain knowledge. In our framework, we obtain the weight tensor in a hierarchical way — we first approximate it by a low-rank tensor, and then estimate the low-rank approximation using the prior knowledge from various sources, e.g., different domain experts. This is motivated by wafer quality prediction in semiconductor manufacturing. Furthermore, we propose an effective algorithm named H-MOTE for solving this framework, which is guaranteed to converge. The time complexity of H-MOTE is linear with respect to the number of examples as well as the size of the weight tensor. Experimental results show the superiority of H-MOTE over state-of-the-art techniques on both synthetic and real data sets.
29

He, Jingfei, Xunan Zheng, Peng Gao, and Yatong Zhou. "Low-rank tensor completion based on tensor train rank with partially overlapped sub-blocks". Signal Processing 190 (January 2022): 108339. http://dx.doi.org/10.1016/j.sigpro.2021.108339.

30

Liu, Yipeng, Jiani Liu, and Ce Zhu. "Low-Rank Tensor Train Coefficient Array Estimation for Tensor-on-Tensor Regression". IEEE Transactions on Neural Networks and Learning Systems 31, no. 12 (December 2020): 5402–11. http://dx.doi.org/10.1109/tnnls.2020.2967022.

31

Jyothula, Sunil Kumar, and Jaya Chandra Prasad Talari. "An Efficient Transform based Low Rank Tensor Completion to Extreme Visual Recovery". Indian Journal of Science and Technology 15, no. 14 (April 11, 2022): 608–18. http://dx.doi.org/10.17485/ijst/v15i14.264.

32

Dong, Le, and Yuan Yuan. "Sparse Constrained Low Tensor Rank Representation Framework for Hyperspectral Unmixing". Remote Sensing 13, no. 8 (April 11, 2021): 1473. http://dx.doi.org/10.3390/rs13081473.

Abstract:
Recently, non-negative tensor factorization (NTF) has attracted the attention of researchers as a very powerful tool. It is used in the unmixing of hyperspectral images (HSI) due to its excellent expression ability without any information loss when describing data. However, most of the existing unmixing methods based on NTF fail to fully explore the unique properties of data, for example, the low rank that exists in both the spectral and spatial domains. To explore this low-rank structure, in this paper we learn the different low-rank representations of HSI in the spectral, spatial and non-local similarity modes. Firstly, the HSI is divided into many patches, and these patches are clustered into multiple groups according to their similarity. Each similarity group can constitute a 4-D tensor, including two spatial modes, a spectral mode and a non-local similarity mode, which has strong low-rank properties. Secondly, a low-rank regularization with a logarithmic function is designed and embedded in the NTF framework, which models the spatial, spectral and non-local similarity modes of these 4-D tensors. In addition, the sparsity of the abundance tensor is also integrated into the unmixing framework to improve the unmixing performance through the L2,1 norm. Experiments on three real data sets illustrate the stability and effectiveness of our algorithm compared with five state-of-the-art methods.
33

Ghadermarzy, Navid, Yaniv Plan, and Özgür Yilmaz. "Near-optimal sample complexity for convex tensor completion". Information and Inference: A Journal of the IMA 8, no. 3 (November 23, 2018): 577–619. http://dx.doi.org/10.1093/imaiai/iay019.

Abstract:
We study the problem of estimating a low-rank tensor when we have noisy observations of a subset of its entries. A rank-$r$, order-$d$, $N \times N \times \cdots \times N$ tensor, where $r=O(1)$, has $O(dN)$ free variables. On the other hand, prior to our work, the best sample complexity that was achieved in the literature is $O\left(N^{\frac{d}{2}}\right)$, obtained by solving a tensor nuclear-norm minimization problem. In this paper, we consider the 'M-norm', an atomic norm whose atoms are rank-1 sign tensors. We also consider a generalization of the matrix max-norm to tensors, which results in a quasi-norm that we call 'max-qnorm'. We prove that solving an M-norm constrained least squares (LS) problem results in nearly optimal sample complexity for low-rank tensor completion (TC). A similar result holds for max-qnorm as well. Furthermore, we show that these bounds are nearly minimax rate-optimal. We also provide promising numerical results for max-qnorm constrained TC, showing improved recovery compared to matricization and alternating LS.
34

Chen, Chuan, Zhe-Bin Wu, Zi-Tai Chen, Zi-Bin Zheng, and Xiong-Jun Zhang. "Auto-weighted robust low-rank tensor completion via tensor-train". Information Sciences 567 (August 2021): 100–115. http://dx.doi.org/10.1016/j.ins.2021.03.025.

35

Liu, Chunsheng, Hong Shan, and Chunlei Chen. "Tensor p-shrinkage nuclear norm for low-rank tensor completion". Neurocomputing 387 (April 2020): 255–67. http://dx.doi.org/10.1016/j.neucom.2020.01.009.

36

Yang, Jing-Hua, Xi-Le Zhao, Teng-Yu Ji, Tian-Hui Ma, and Ting-Zhu Huang. "Low-rank tensor train for tensor robust principal component analysis". Applied Mathematics and Computation 367 (February 2020): 124783. http://dx.doi.org/10.1016/j.amc.2019.124783.

37

Zhang, Zhao, Cheng Ding, Zhisheng Gao, and Chunzhi Xie. "ANLPT: Self-Adaptive and Non-Local Patch-Tensor Model for Infrared Small Target Detection". Remote Sensing 15, no. 4 (February 12, 2023): 1021. http://dx.doi.org/10.3390/rs15041021.

Abstract:
Infrared small target detection is widely used for early warning, aircraft monitoring, ship monitoring, and so on, which requires the small target and its background to be represented and modeled effectively to achieve their complete separation. Among many algorithms, low-rank sparse decomposition based on the structural features of infrared images has attracted much attention because of its good interpretability. Based on our study, we found some shortcomings in existing baseline methods, such as redundancy in constructing tensors and fixed compromising factors. A self-adaptive low-rank sparse tensor decomposition model for infrared dim small target detection is proposed in this paper. In this model, the entropy of an image block is used for fast matching of non-local similar blocks to construct a better sparse tensor for small targets. An adaptive strategy of low-rank sparse tensor decomposition is proposed for different background environments, which adaptively determines the weight coefficient to achieve effective separation of background and small targets. Tensor robust principal component analysis (TRPCA) is applied to perform the low-rank sparse tensor decomposition, reconstructing small targets and their backgrounds separately. Extensive experiments on various types of data sets show that the proposed method is competitive.
38

Shi, Qiquan, Jiaming Yin, Jiajun Cai, Andrzej Cichocki, Tatsuya Yokota, Lei Chen, Mingxuan Yuan, and Jia Zeng. "Block Hankel Tensor ARIMA for Multiple Short Time Series Forecasting". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5758–66. http://dx.doi.org/10.1609/aaai.v34i04.6032.

Abstract:
This work proposes a novel approach for multiple time series forecasting. First, the multi-way delay embedding transform (MDT) is employed to represent time series as low-rank block Hankel tensors (BHT). Then, the higher-order tensors are projected to compressed core tensors by applying Tucker decomposition. At the same time, a generalized tensor Autoregressive Integrated Moving Average (ARIMA) is explicitly used on consecutive core tensors to predict future samples. In this manner, the proposed approach tactically incorporates the unique advantages of MDT tensorization (to exploit mutual correlations) and tensor ARIMA coupled with low-rank Tucker decomposition into a unified framework. This framework exploits the low-rank structure of block Hankel tensors in the embedded space and captures the intrinsic correlations among multiple time series, which can thus improve the forecasting results, especially for multiple short time series. Experiments conducted on three public datasets and two industrial datasets verify that the proposed BHT-ARIMA effectively improves forecasting accuracy and reduces computational cost compared with state-of-the-art methods.
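
The first step, multi-way delay embedding, is a Hankelization along the time mode; a single-series sketch of that step (BHT-ARIMA applies it jointly across many series and then Tucker-compresses the resulting block Hankel tensor):

```python
import numpy as np

def delay_embed(series, tau):
    """Delay embedding of a 1-D series: stack length-`tau` sliding
    windows into a (tau, T - tau + 1) Hankel matrix."""
    T = len(series)
    return np.stack([series[i:i + tau] for i in range(T - tau + 1)], axis=1)

# delay_embed(np.arange(6), 3) gives the Hankel matrix
# [[0 1 2 3],
#  [1 2 3 4],
#  [2 3 4 5]]
```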
39

Mohaoui, S., K. El Qate, A. Hakim, and S. Raghay. "Low-rank tensor completion using nonconvex total variation". Mathematical Modeling and Computing 9, no. 2 (2022): 365–74. http://dx.doi.org/10.23939/mmc2022.02.365.

Abstract:
In this work, we study the tensor completion problem, in which the main goal is to predict missing values in visual data. To benefit from the smoothness structure and edge-preserving property of visual images, we suggest a tensor completion model that seeks gradient sparsity via the l0-norm. The proposal combines low-rank matrix factorization, which guarantees the low-rankness property, with nonconvex total variation (TV). We present several experiments to demonstrate the performance of our model compared with popular tensor completion methods in terms of visual and quantitative measures.
40

Jia, Yuheng, Hui Liu, Junhui Hou, and Qingfu Zhang. "Clustering Ensemble Meets Low-rank Tensor Approximation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7970–78. http://dx.doi.org/10.1609/aaai.v35i9.16972.

Abstract:
This paper explores the problem of clustering ensemble, which aims to combine multiple base clusterings to produce better performance than that of the individual one. The existing clustering ensemble methods generally construct a co-association matrix, which indicates the pairwise similarity between samples, as the weighted linear combination of the connective matrices from different base clusterings, and the resulting co-association matrix is then adopted as the input of an off-the-shelf clustering algorithm, e.g., spectral clustering. However, the co-association matrix may be dominated by poor base clusterings, resulting in inferior performance. In this paper, we propose a novel low-rank tensor approximation based method to solve the problem from a global perspective. Specifically, by inspecting whether two samples are clustered to an identical cluster under different base clusterings, we derive a coherent-link matrix, which contains limited but highly reliable relationships between samples. We then stack the coherent-link matrix and the co-association matrix to form a three-dimensional tensor, the low-rankness property of which is further explored to propagate the information of the coherent-link matrix to the co-association matrix, producing a refined co-association matrix. We formulate the proposed method as a convex constrained optimization problem and solve it efficiently. Experimental results over 7 benchmark data sets show that the proposed model achieves a breakthrough in clustering performance, compared with 12 state-of-the-art methods. To the best of our knowledge, this is the first work to explore the potential of low-rank tensor on clustering ensemble, which is fundamentally different from previous approaches. Last but not least, our method only contains one parameter, which can be easily tuned.
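
The two matrices the method stacks into a tensor are easy to construct from base clustering labels; a small sketch under that reading of the abstract (the low-rank propagation step that refines the co-association matrix is the paper's contribution and is omitted):

```python
import numpy as np

def coassociation_tensor(labels_list):
    """Build the co-association matrix (fraction of base clusterings
    placing samples i and j in the same cluster) and the coherent-link
    matrix (pairs co-clustered in *every* base clustering: limited but
    highly reliable links), then stack them into an n x n x 2 tensor.
    """
    L = np.asarray(labels_list)                # (n_clusterings, n_samples)
    same = L[:, :, None] == L[:, None, :]      # per-clustering agreement
    co_assoc = same.mean(axis=0)
    coherent = same.all(axis=0).astype(float)
    return np.stack([co_assoc, coherent], axis=2)
```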
41

Suzuki, Taiji, and Heishiro Kanagawa. "Bayes method for low rank tensor estimation". Journal of Physics: Conference Series 699 (March 2016): 012020. http://dx.doi.org/10.1088/1742-6596/699/1/012020.

42

Kadmon, Jonathan, and Surya Ganguli. "Statistical mechanics of low-rank tensor decomposition". Journal of Statistical Mechanics: Theory and Experiment 2019, no. 12 (December 20, 2019): 124016. http://dx.doi.org/10.1088/1742-5468/ab3216.

43

Kressner, Daniel, Michael Steinlechner, and Bart Vandereycken. "Low-rank tensor completion by Riemannian optimization". BIT Numerical Mathematics 54, no. 2 (November 7, 2013): 447–68. http://dx.doi.org/10.1007/s10543-013-0455-z.

44

Xie, Ting, Shutao Li, Leyuan Fang, and Licheng Liu. "Tensor Completion via Nonlocal Low-Rank Regularization". IEEE Transactions on Cybernetics 49, no. 6 (June 2019): 2344–54. http://dx.doi.org/10.1109/tcyb.2018.2825598.

45

Sohrabi Bonab, Zahra, and Mohammad B. Shamsollahi. "Low-rank Tensor Restoration for ERP extraction". Biomedical Signal Processing and Control 87 (January 2024): 105379. http://dx.doi.org/10.1016/j.bspc.2023.105379.

46

王, 香懿. "Improved Robust Low-Rank Regularization Tensor Completion". Advances in Applied Mathematics 11, no. 11 (2022): 7647–52. http://dx.doi.org/10.12677/aam.2022.1111809.

47

Wang, Xiangyi, and Wei Jiang. "Improved Robust Low-Rank Regularization Tensor Completion". OALib 09, no. 11 (2022): 1–25. http://dx.doi.org/10.4236/oalib.1109425.

48

Hosono, Kaito, Shunsuke Ono, and Takamichi Miyata. "On the Synergy between Nonconvex Extensions of the Tensor Nuclear Norm for Tensor Recovery". Signals 2, no. 1 (February 18, 2021): 108–21. http://dx.doi.org/10.3390/signals2010010.

Abstract:
Low-rank tensor recovery has attracted much attention among various tensor recovery approaches. Unlike the matrix rank, a tensor rank has several definitions, e.g., the CP rank and the Tucker rank. Many low-rank tensor recovery methods focus on the Tucker rank. Since the Tucker rank is nonconvex and discontinuous, many relaxations of it have been proposed, e.g., the sum of nuclear norms, the weighted tensor nuclear norm, and the weighted tensor Schatten-p norm. In particular, the weighted tensor Schatten-p norm has two parameters, the weight and p, and the sum of nuclear norms and the weighted tensor nuclear norm are special cases of these parameters. However, there has been no detailed discussion of whether the effects of the weighting and of p are synergistic. In this paper, we propose a novel low-rank tensor completion model using the weighted tensor Schatten-p norm to reveal the relationships between the weight and p. To clarify whether complex methods such as the weighted tensor Schatten-p norm are necessary, we compare them with a simple method using rank-constrained minimization. It was found that the simple methods did not outperform the complex methods unless the rank of the original tensor could be accurately known. If we can obtain the ideal weight, p=1 is sufficient, although it is necessary to set p<1 when using weights obtained from observations. These results are consistent with existing reports.
49

Duan, Yi-Shi, and Shao-Feng Wu. "Magnetic Branes from Generalized 't Hooft Tensor". Modern Physics Letters A 21, no. 34 (November 10, 2006): 2599–606. http://dx.doi.org/10.1142/s0217732306020500.

Abstract:
The 't Hooft–Polyakov magnetic monopole regularly realizes the Dirac magnetic monopole in terms of a rank-two tensor, the so-called 't Hooft tensor, in 3D space. Based on the Chern kernel method, we propose arbitrary-rank 't Hooft tensors, which universally determine the quantized low-energy boundaries of generalized Georgi–Glashow models under asymptotic conditions. Furthermore, the dual magnetic brane theory is built up in terms of ϕ-mapping theory.
50

Zhou, Junxiu, Yangyang Tao, and Xian Liu. "Tensor Decomposition for Salient Object Detection in Images". Big Data and Cognitive Computing 3, no. 2 (June 19, 2019): 33. http://dx.doi.org/10.3390/bdcc3020033.

Abstract:
The fundamental challenge of salient object detection is to find the decision boundary that separates the salient object from the background. Low-rank recovery models address this challenge by decomposing an image or image-feature matrix into a low-rank matrix representing the image background and a sparse matrix representing salient objects. This method is simple and efficient in finding salient objects. However, it needs to convert a high-dimensional feature space into a two-dimensional matrix and therefore does not take full advantage of image features in discovering the salient object. In this article, we propose a tensor decomposition method that considers spatial consistency and tries to make full use of image feature information in detecting salient objects. First, we keep high-dimensional image features in a tensor to preserve spatial information about the image features. Following this, we use a tensor low-rank and sparse model to decompose the image feature tensor into a low-rank tensor and a sparse tensor, where the low-rank tensor represents the background and the sparse tensor is used to identify the salient object. To solve the tensor low-rank and sparse model, we employ a heuristic strategy that relaxes the definitions of the tensor trace norm and the tensor l1-norm. Experimental results on three saliency benchmarks demonstrate the effectiveness of the proposed tensor decomposition method.
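
The low-rank-plus-sparse split underlying this and several earlier entries can be sketched with a naive alternating shrinkage; below, rank is relaxed by the nuclear norm of a single unfolding and sparsity by the l1 norm, a simplification of the heuristic tensor trace norm relaxation the authors describe, not their solver:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def lowrank_sparse_split(F, tau_l=1.0, tau_s=0.1, n_iter=50):
    """Naive alternating shrinkage for F ~ L (background) + S (salient).

    L is updated by SVT on the mode-1 unfolding (a simplification of
    the tensor trace norm over all unfoldings) and S by elementwise
    soft-thresholding (l1 relaxation of sparsity).  A toy stand-in
    that only shows the structure of the model.
    """
    L = np.zeros_like(F)
    S = np.zeros_like(F)
    n1 = F.shape[0]
    for _ in range(n_iter):
        L = svt((F - S).reshape(n1, -1), tau_l).reshape(F.shape)
        R = F - L
        S = np.sign(R) * np.maximum(np.abs(R) - tau_s, 0.0)
    return L, S
```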