Ready-made bibliography on the topic "Low-Rank Tensor"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Low-Rank Tensor".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, if the relevant parameters are available in the metadata.

Journal articles on the topic "Low-Rank Tensor"

1

Zhong, Guoqiang, and Mohamed Cheriet. "Large Margin Low Rank Tensor Analysis". Neural Computation 26, no. 4 (April 2014): 761–80. http://dx.doi.org/10.1162/neco_a_00570.

Abstract:
We present a supervised model for tensor dimensionality reduction, which is called large margin low rank tensor analysis (LMLRTA). In contrast to traditional vector representation-based dimensionality reduction methods, LMLRTA can take any order of tensors as input. And unlike previous tensor dimensionality reduction methods, which can learn only the low-dimensional embeddings with a priori specified dimensionality, LMLRTA can automatically and jointly learn the dimensionality and the low-dimensional representations from data. Moreover, LMLRTA delivers low rank projection matrices, while it encourages data of the same class to be close and of different classes to be separated by a large margin of distance in the low-dimensional tensor space. LMLRTA can be optimized using an iterative fixed-point continuation algorithm, which is guaranteed to converge to a local optimal solution of the optimization problem. We evaluate LMLRTA on an object recognition application, where the data are represented as 2D tensors, and a face recognition application, where the data are represented as 3D tensors. Experimental results show the superiority of LMLRTA over state-of-the-art approaches.
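The common primitive behind tensor dimensionality-reduction methods of this kind is projecting each tensor sample along every mode with a learned matrix. The sketch below illustrates only that multilinear projection step in NumPy, with random placeholder matrices; it is not the LMLRTA algorithm itself, whose projections are learned with the large-margin objective described above.

```python
import numpy as np

def multilinear_project(X, mats):
    """Project a tensor sample X along each mode: Y = X x_1 U1^T x_2 U2^T ...
    mats[k] has shape (I_k, d_k) with d_k <= I_k, so Y has shape (d_1, ..., d_N)."""
    Y = X
    for mode, U in enumerate(mats):
        Y = np.moveaxis(Y, mode, 0)                    # bring `mode` to the front
        Y = np.tensordot(U.T, Y, axes=([1], [0]))      # multiply along that mode
        Y = np.moveaxis(Y, 0, mode)                    # move it back into place
    return Y

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32, 3))           # e.g. one 3-D sample
mats = [rng.standard_normal((32, 5)),          # placeholder projection matrices
        rng.standard_normal((32, 5)),
        rng.standard_normal((3, 2))]
print(multilinear_project(X, mats).shape)      # (5, 5, 2)
```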
2

Liu, Hongyi, Hanyang Li, Zebin Wu, and Zhihui Wei. "Hyperspectral Image Recovery Using Non-Convex Low-Rank Tensor Approximation". Remote Sensing 12, no. 14 (July 15, 2020): 2264. http://dx.doi.org/10.3390/rs12142264.

Abstract:
Low-rank tensors have received more attention in hyperspectral image (HSI) recovery. Minimizing the tensor nuclear norm, as a low-rank approximation method, often leads to modeling bias. To achieve an unbiased approximation and improve the robustness, this paper develops a non-convex relaxation approach for low-rank tensor approximation. Firstly, a non-convex approximation of tensor nuclear norm (NCTNN) is introduced to the low-rank tensor completion. Secondly, a non-convex tensor robust principal component analysis (NCTRPCA) method is proposed, which aims at exactly recovering a low-rank tensor corrupted by mixed-noise. The two proposed models are solved efficiently by the alternating direction method of multipliers (ADMM). Three HSI datasets are employed to exhibit the superiority of the proposed model over the low rank penalization method in terms of accuracy and robustness.
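The contrast between a convex nuclear-norm penalty and a non-convex relaxation is easiest to see on the singular values of a single slice: soft thresholding shrinks all singular values equally, which is the source of the modeling bias mentioned above, while a non-convex surrogate shrinks large values less. The snippet below illustrates the idea on a matrix with a generic reweighted shrinkage; it is only an illustration, not the NCTNN penalty or the ADMM solver used in the paper.

```python
import numpy as np

def soft_threshold_sv(M, tau):
    """Convex: proximal operator of the nuclear norm (uniform shrinkage)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def nonconvex_threshold_sv(M, tau, eps=1e-6):
    """Illustrative non-convex surrogate: reweighted shrinkage tau**2 / (s + eps)
    suppresses small singular values while barely biasing the large ones."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau**2 / (s + eps), 0.0)) @ Vt

rng = np.random.default_rng(1)
L = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 50))  # rank-4 ground truth
M = L + 0.1 * rng.standard_normal((50, 50))                      # noisy observation
for shrink in (soft_threshold_sv, nonconvex_threshold_sv):
    err = np.linalg.norm(shrink(M, 1.5) - L) / np.linalg.norm(L)
    print(f"{shrink.__name__}: relative error {err:.3f}")
```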
3

Zhou, Pan, Canyi Lu, Zhouchen Lin, and Chao Zhang. "Tensor Factorization for Low-Rank Tensor Completion". IEEE Transactions on Image Processing 27, no. 3 (March 2018): 1152–63. http://dx.doi.org/10.1109/tip.2017.2762595.

4

He, Yicong, and George K. Atia. "Multi-Mode Tensor Space Clustering Based on Low-Tensor-Rank Representation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6893–901. http://dx.doi.org/10.1609/aaai.v36i6.20646.

Abstract:
Traditional subspace clustering aims to cluster data lying in a union of linear subspaces. The vectorization of high-dimensional data to 1-D vectors to perform clustering ignores much of the structure intrinsic to such data. To preserve said structure, in this work we exploit clustering in a high-order tensor space rather than a vector space. We develop a novel low-tensor-rank representation (LTRR) for unfolded matrices of tensor data lying in a low-rank tensor space. The representation coefficient matrix of an unfolding matrix is tensorized to a 3-order tensor, and the low-tensor-rank constraint is imposed on the transformed coefficient tensor to exploit the self-expressiveness property. Then, inspired by the multi-view clustering framework, we develop a multi-mode tensor space clustering algorithm (MMTSC) that can deal with tensor space clustering with or without missing entries. The tensor is unfolded along each mode, and the coefficient matrices are obtained for each unfolded matrix. The low tensor rank constraint is imposed on a tensor combined from transformed coefficient tensors of each mode, such that the proposed method can simultaneously capture the low rank property for the data within each tensor space and maintain cluster consistency across different modes. Experimental results demonstrate that the proposed MMTSC algorithm can outperform existing clustering algorithms in many cases.
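Both LTRR and MMTSC operate on mode-wise unfoldings of the data tensor. A minimal NumPy sketch of unfolding and folding is given below; the convention used here (mode-k fibers become columns) is one common choice and not necessarily the exact one used by the authors.

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding: fibers along `mode` become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold` for a tensor of the given full shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

T = np.arange(24).reshape(2, 3, 4)
for k in range(3):
    M = unfold(T, k)
    assert np.array_equal(fold(M, k, T.shape), T)
    print(f"mode-{k} unfolding has shape {M.shape}")
```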
5

Liu, Xiaohua, and Guijin Tang. "Color Image Restoration Using Sub-Image Based Low-Rank Tensor Completion". Sensors 23, no. 3 (February 3, 2023): 1706. http://dx.doi.org/10.3390/s23031706.

Abstract:
Many restoration methods use the low-rank constraint of high-dimensional image signals to recover corrupted images. These signals are usually represented by tensors, which can maintain their inherent relevance. The image of this simple tensor presentation has a certain low-rank property, but does not have a strong low-rank property. In order to enhance the low-rank property, we propose a novel method called sub-image based low-rank tensor completion (SLRTC) for image restoration. We first sample a color image to obtain sub-images, and adopt these sub-images instead of the original single image to form a tensor. Then we conduct the mode permutation on this tensor. Next, we exploit the tensor nuclear norm defined based on the tensor-singular value decomposition (t-SVD) to build the low-rank completion model. Finally, we perform the tensor-singular value thresholding (t-SVT) based on the standard alternating direction method of multipliers (ADMM) algorithm to solve the aforementioned model. Experimental results have shown that compared with the state-of-the-art tensor completion techniques, the proposed method can provide superior results in terms of objective and subjective assessment.
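The t-SVT operator mentioned above applies singular-value soft thresholding to every frontal slice of the tensor in the Fourier domain along the third mode. A minimal sketch under the usual t-SVD conventions is shown below; the sub-image sampling, mode permutation, and ADMM loop of SLRTC are omitted.

```python
import numpy as np

def t_svt(T, tau):
    """Tensor singular value thresholding for a 3-way tensor (t-SVD based):
    FFT along mode 3, soft-threshold the singular values of each frontal slice,
    then transform back."""
    F = np.fft.fft(T, axis=2)
    out = np.empty_like(F)
    for k in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(F[:, :, k], full_matrices=False)
        out[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.real(np.fft.ifft(out, axis=2))

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 30, 8))
print(np.allclose(t_svt(X, 0.0), X))   # tau = 0 leaves the tensor unchanged
```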
6

Jiang, Yuanxiang, Qixiang Zhang, Zhanjiang Yuan, and Chen Wang. "Convex Robust Recovery of Corrupted Tensors via Tensor Singular Value Decomposition and Local Low-Rank Approximation". Journal of Physics: Conference Series 2670, no. 1 (December 1, 2023): 012026. http://dx.doi.org/10.1088/1742-6596/2670/1/012026.

Abstract:
This paper discusses the recovery of tensor data corrupted by random noise. Our approach assumes that the potential structure of the data is a linear combination of several low-rank tensor subspaces. The goal is to recover exactly these local low-rank tensors and remove random noise as much as possible. A non-parametric kernel smoothing technique is employed to establish an effective mathematical notion of local models. After that, each local model can be robustly separated into a low-rank tensor and a sparse tensor. The low-rank tensor can be recovered by minimizing a weighted combination of the ℓ1 norm and the tensor nuclear norm (TNN) obtained as the tightest convex relaxation of the tensor multi-linear rank defined in the Tensor Singular Value Decomposition (TSVD). Numerical simulation experiments verify that our proposed approach can effectively denoise tensor data such as color images with random noise and has superior performance compared to existing methods.
7

Yu, Shicheng, Jiaqing Miao, Guibing Li, Weidong Jin, Gaoping Li, and Xiaoguang Liu. "Tensor Completion via Smooth Rank Function Low-Rank Approximate Regularization". Remote Sensing 15, no. 15 (August 3, 2023): 3862. http://dx.doi.org/10.3390/rs15153862.

Abstract:
In recent years, the tensor completion algorithm has played a vital part in the reconstruction of missing elements within high-dimensional remote sensing image data. Due to the difficulty of tensor rank computation, scholars have proposed many substitutions of tensor rank. By introducing the smooth rank function (SRF), this paper proposes a new tensor rank nonconvex substitution function that performs adaptive weighting on different singular values to avoid the performance deficiency caused by the equal treatment of all singular values. On this basis, a novel tensor completion model that minimizes the SRF as the objective function is proposed. The proposed model is efficiently solved by adding the hot start method to the alternating direction multiplier method (ADMM) framework. Extensive experiments are carried out in this paper to demonstrate the resilience of the proposed model to missing data. The results illustrate that the proposed model is superior to other advanced models in tensor completeness.
8

Nie, Jiawang. "Low Rank Symmetric Tensor Approximations". SIAM Journal on Matrix Analysis and Applications 38, no. 4 (January 2017): 1517–40. http://dx.doi.org/10.1137/16m1107528.

9

Mickelin, Oscar, and Sertac Karaman. "Multiresolution Low-rank Tensor Formats". SIAM Journal on Matrix Analysis and Applications 41, no. 3 (January 2020): 1086–114. http://dx.doi.org/10.1137/19m1284579.

10

Gong, Xiao, Wei Chen, Jie Chen, and Bo Ai. "Tensor Denoising Using Low-Rank Tensor Train Decomposition". IEEE Signal Processing Letters 27 (2020): 1685–89. http://dx.doi.org/10.1109/lsp.2020.3025038.


Doctoral dissertations on the topic "Low-Rank Tensor"

1

Stojanac, Željka [Verfasser]. "Low-rank Tensor Recovery / Željka Stojanac". Bonn : Universitäts- und Landesbibliothek Bonn, 2016. http://d-nb.info/1119888565/34.

2

Shi, Qiquan. "Low rank tensor decomposition for feature extraction and tensor recovery". HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/549.

Abstract:
Feature extraction and tensor recovery problems are important yet challenging, particularly for multi-dimensional data with missing values and/or noise. Low-rank tensor decomposition approaches are widely used for solving these problems. This thesis focuses on three common tensor decompositions (CP, Tucker and t-SVD) and develops a set of decomposition-based approaches. The proposed methods aim to extract low-dimensional features from complete/incomplete data and recover tensors given partial and/or grossly corrupted observations.
Based on CP decomposition, semi-orthogonal multilinear principal component analysis (SO-MPCA) seeks a tensor-to-vector projection that maximizes the captured variance with the orthogonality constraint imposed in only one mode, and it further integrates the relaxed start strategy (SO-MPCA-RS) to achieve better feature extraction performance. To directly obtain the features from incomplete data, low-rank CP and Tucker decomposition with feature variance maximization (TDVM-CP and TDVM-Tucker) are proposed. TDVM methods explore the relationship among tensor samples via feature variance maximization, while estimating the missing entries via low-rank CP and Tucker approximation, leading to informative features extracted directly from partial observations. TDVM-CP extracts low-dimensional vector features viewing the weight vectors as features and TDVM-Tucker learns low-dimensional tensor features viewing the core tensors as features. TDVM methods can be generalized to other variants based on other tensor decompositions. On the other hand, this thesis solves the missing data problem by introducing low-rank matrix/tensor completion methods, and also contributes to automatic rank estimation. Rank-one matrix decomposition coupled with L1-norm regularization (L1MC) addresses the matrix rank estimation problem. With the correct estimated rank, L1MC refines its model without L1-norm regularization (L1MC-RF) and achieves optimal recovery results given enough observations. In addition, CP-based nuclear norm regularized orthogonal CP decomposition (TREL1) solves the challenging CP- and Tucker-rank estimation problems. The estimated rank can improve the tensor completion accuracy of existing decomposition-based methods. Furthermore, tensor singular value decomposition (t-SVD) combined with tensor nuclear norm (TNN) regularization (ARE_TNN) provides automatic tubal-rank estimation. With the accurate tubal-rank determination, ARE_TNN relaxes its model without the TNN constraint (TC-ARE) and results in optimal tensor completion under mild conditions. In addition, ARE_TNN refines its model by explicitly utilizing its determined tubal-rank a priori and then successfully recovers low-rank tensors based on incomplete and/or grossly corrupted observations (RTC-ARE: robust tensor completion/RTPCA-ARE: robust tensor principal component analysis).
Experiments and evaluations are presented and analyzed using synthetic data and real-world images/videos in machine learning, computer vision, and data mining applications. For feature extraction, the experimental results of face and gait recognition show that SO-MPCA-RS achieves the best overall performance compared with competing algorithms, and its relaxed start strategy is also effective for other CP-based PCA methods.
In the applications of face recognition, object/action classification, and face/gait clustering, TDVM methods not only stably yield similar good results under various multi-block missing settings and different parameters in general, but also outperform the competing methods with significant improvements. For matrix/tensor rank estimation and recovery, L1MC-RF efficiently estimates the true rank and exactly recovers the incomplete images/videos under mild conditions, and outperforms the state-of-the-art algorithms on the whole. Furthermore, the empirical evaluations show that TREL1 correctly determines the CP-/Tucker- ranks well, given sufficient observed entries, which consistently improves the recovery performance of existing decomposition-based tensor completion. The t-SVD recovery methods TC-ARE, RTPCA-ARE, and RTC-ARE not only inherit the ability of ARE_TNN to achieve accurate rank estimation, but also achieve good performance in the tasks of (robust) image/video completion, video denoising, and background modeling. This outperforms the state-of-the-art methods in all cases we have tried so far with significant improvements.
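As background for the CP-based methods above, a plain CP decomposition can be computed with alternating least squares. The sketch below is a bare-bones CP-ALS for a 3-way tensor using NumPy; it includes none of the orthogonality constraints, variance maximization, or rank-estimation machinery developed in the thesis.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (IJ x R)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    """Alternating least squares for a rank-`rank` CP model of a 3-way tensor."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Build an exactly rank-3 tensor and check that CP-ALS recovers it closely.
rng = np.random.default_rng(3)
A0, B0, C0 = (rng.standard_normal((d, 3)) for d in (10, 12, 14))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(f"relative reconstruction error: {np.linalg.norm(T - T_hat) / np.linalg.norm(T):.2e}")
```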
3

Han, Xu. "Robust low-rank tensor approximations using group sparsity". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S001/document.

Abstract:
Over the last decades, tensor decompositions have gained in popularity in several application domains. Most of the existing tensor decomposition methods require an estimate of the tensor rank in a preprocessing step to guarantee outstanding decomposition results. Unfortunately, learning the exact rank of the tensor can be difficult in some particular cases, such as for low signal-to-noise ratio values. The objective of this thesis is to compute the best low-rank tensor approximation by a joint estimation of the rank and the loading matrices from the noisy tensor. Based on the low-rank property and an over-estimation of the loading matrices or the core tensor, this joint estimation problem is solved by promoting group sparsity of the over-estimated loading matrices and/or the core tensor. More particularly, three new methods are proposed to achieve efficient low-rank estimation for three different tensor decomposition models, namely Canonical Polyadic Decomposition (CPD), Block Term Decomposition (BTD) and Multilinear Tensor Decomposition (MTD). All the proposed methods consist of two steps: the first step is designed to estimate the rank, and the second step uses the estimated rank to accurately compute the loading matrices. Numerical simulations with noisy tensors and results on real data show the effectiveness of the proposed methods compared to state-of-the-art methods.
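The group-sparsity criterion for rank estimation can be illustrated on an over-estimated CP factorization: whole columns (rank-one components) with negligible energy are discarded, and the number of surviving columns is the rank estimate. The snippet below is only a simplified illustration of that criterion on synthetic factors, not the joint estimation algorithms proposed in the thesis.

```python
import numpy as np

def estimate_cp_rank(factors, rel_tol=1e-2):
    """Estimate the CP rank from (possibly over-estimated) factor matrices by
    measuring the energy of each rank-one component, i.e. the product of the
    column norms across all factors, and discarding negligible groups."""
    energies = np.prod([np.linalg.norm(F, axis=0) for F in factors], axis=0)
    return int(np.sum(energies > rel_tol * energies.max())), energies

rng = np.random.default_rng(4)
R_true, R_over = 3, 6
# Over-estimated factors: 3 genuine components plus 3 near-zero columns,
# mimicking what a group-sparsity penalty is meant to produce.
factors = [np.hstack([rng.standard_normal((d, R_true)),
                      1e-4 * rng.standard_normal((d, R_over - R_true))])
           for d in (10, 11, 12)]
rank, energies = estimate_cp_rank(factors)
print(rank, np.round(energies, 4))   # rank estimate should be 3
```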
4

Benedikt, Udo. "Low-Rank Tensor Approximation in post Hartree-Fock Methods". Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-133194.

Abstract:
In this thesis the application of novel tensor decomposition and tensor representation techniques in highly accurate post Hartree-Fock methods is evaluated. These representation techniques can help to overcome the steep scaling behaviour of high level ab-initio calculations with increasing system size and therefore break the "curse of dimensionality". After a comparison of various tensor formats the application of the "canonical polyadic" format (CP) is described in detail. There, especially the casting of a normal, index-based tensor into the CP format (tensor decomposition) and a method for a low rank approximation (rank reduction) of the two-electron integrals in the AO basis are investigated. The decisive quantity for the applicability of the CP format is the scaling of the rank with increasing system and basis set size. The memory requirements and the computational effort for tensor manipulations in the CP format are only linear in the number of dimensions but still depend on the expansion length (rank) of the approximation. Furthermore, the AO-MO transformation and an MP2 algorithm with decomposed tensors in the CP format are evaluated and the scaling with increasing system and basis set size is investigated. Finally, a Coupled-Cluster algorithm based only on low-rank CP representation of the MO integrals is developed. There, especially the successive tensor contraction during the iterative solution of the amplitude equations and the error propagation upon multiple application of the reduction procedure are discussed. In conclusion, the overall complexity of a Coupled-Cluster procedure with tensors in CP format is evaluated and some possibilities for improvements of the rank reduction procedure tailored to the needs in electronic structure calculations are shown.
5

Rabusseau, Guillaume. "A tensor perspective on weighted automata, low-rank regression and algebraic mixtures". Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4062.

Abstract:
This thesis tackles several problems exploring connections between tensors and machine learning. In the first chapter, we propose an extension of the classical notion of recognizable function on strings and trees to graphs. We first show that the computations of weighted automata on strings and trees can be interpreted in a natural and unifying way using tensor networks, which naturally leads us to define a computational model on graphs: graph weighted models; we then study fundamental properties of this model and present preliminary learning results. The second chapter tackles a model reduction problem for weighted tree automata. We propose a principled approach to the following problem: given a weighted tree automaton with n states, how can we find an automaton with m
6

Alora, John Irvin P. "Automated synthesis of low-rank stochastic dynamical systems using the tensor-train decomposition". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105006.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 79-83).
Cyber-physical systems are increasingly becoming integrated in various fields such as medicine, finance, robotics, and energy. In these systems and their applications, safety and correctness of operation is of primary concern, sparking a large amount of interest in the development of ways to verify system behavior. The tight coupling of physical constraints and computation that typically characterizes cyber-physical systems makes them extremely complex, resulting in unexpected failure modes. Furthermore, disturbances in the environment and uncertainties in the physical model require these systems to be robust. These are difficult constraints, requiring cyber-physical systems to be able to reason about their behavior and respond to events in real-time. Thus, the goal of automated synthesis is to construct a controller that provably implements a range of behaviors given by a specification of how the system should operate. Unfortunately, many approaches to automated synthesis are ad hoc and are limited to simple systems that admit specific structure (e.g. linear, affine systems). Not only that, but they are also designed without taking into account uncertainty. In order to tackle more general problems, several computational frameworks have been developed that allow more general dynamics and uncertainty to be investigated. Furthermore, all of the existing computational algorithms suffer from the curse of dimensionality: the run time scales exponentially with increasing dimensionality of the state space. As a result, existing algorithms apply to systems with only a few degrees of freedom. In this thesis, we consider a stochastic optimal control problem with a special class of linear temporal logic specifications and propose a novel algorithm based on the tensor-train decomposition. We prove that the run time of the proposed algorithm scales linearly with the dimensionality of the state space and polynomially with the rank of the optimal cost-to-go function.
by John Irvin P. Alora.
S.M.
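For background, the tensor-train format used in this thesis can be computed with the standard TT-SVD procedure: reshape, truncate an SVD, absorb the remainder, and repeat mode by mode. The NumPy sketch below shows only this generic decomposition; it is not the dynamic-programming synthesis algorithm developed in the thesis.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a d-way tensor into TT cores G[k] of shape (r_{k-1}, n_k, r_k)
    using sequential truncated SVDs (ranks chosen by the tolerance `eps`)."""
    shape = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor (for checking only)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.squeeze(axis=(0, -1))

# Low-TT-rank test tensor: a separable sum, so the TT ranks stay small.
x = np.linspace(0, 1, 20)
T = np.sin(x)[:, None, None] + np.cos(x)[None, :, None] * x[None, None, :]
cores = tt_svd(T)
print([G.shape for G in cores], np.linalg.norm(tt_to_full(cores) - T))
```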
7

Ceruti, Gianluca [Verfasser]. "Unconventional contributions to dynamical low-rank approximation of tree tensor networks / Gianluca Ceruti". Tübingen : Universitätsbibliothek Tübingen, 2021. http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1186805.

8

Gorodetsky, Alex Arkady. "Continuous low-rank tensor decompositions, with applications to stochastic optimal control and data assimilation". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108918.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 205-214).
Optimal decision making under uncertainty is critical for control and optimization of complex systems. However, many techniques for solving problems such as stochastic optimal control and data assimilation encounter the curse of dimensionality when too many state variables are involved. In this thesis, we propose a framework for computing with high-dimensional functions that mitigates this exponential growth in complexity for problems with separable structure. Our framework tightly integrates two emerging areas: tensor decompositions and continuous computation. Tensor decompositions are able to effectively compress and operate with low-rank multidimensional arrays. Continuous computation is a paradigm for computing with functions instead of arrays, and it is best realized by Chebfun, a MATLAB package for computing with functions of up to three dimensions. Continuous computation provides a natural framework for building numerical algorithms that effectively, naturally, and automatically adapt to problem structure. The first part of this thesis describes a compressed continuous computation framework centered around a continuous analogue to the (discrete) tensor-train decomposition called the function-train decomposition. Computation with the function-train requires continuous matrix factorizations and continuous numerical linear algebra. Continuous analogues are presented for performing cross approximation; rounding; multilinear algebra operations such as addition, multiplication, integration, and differentiation; and continuous, rank-revealing, alternating least squares. Advantages of the function-train over the tensor-train include the ability to adaptively approximate functions and the ability to compute with functions that are parameterized differently. For example, while elementwise multiplication between tensors of different sizes is undefined, functions in FT format can be readily multiplied together. Next, we develop compressed versions of value iteration, policy iteration, and multilevel algorithms for solving dynamic programming problems arising in stochastic optimal control. These techniques enable computing global solutions to a broader set of problems, for example those with non-affine control inputs, than previously possible. Examples are presented for motion planning with robotic systems that have up to seven states. Finally, we use the FT to extend integration-based Gaussian filtering to larger state spaces than previously considered. Examples are presented for dynamical systems with up to twenty states.
by Alex Arkady Gorodetsky.
Ph. D.
9

Benedikt, Udo [Verfasser], Alexander A. [Akademischer Betreuer] Auer, and Sibylle [Gutachter] Gemming. "Low-Rank Tensor Approximation in post Hartree-Fock Methods / Udo Benedikt ; Gutachter: Sibylle Gemming ; Betreuer: Alexander A. Auer". Chemnitz : Universitätsbibliothek Chemnitz, 2014. http://d-nb.info/1230577440/34.

10

Cordolino, Sobral Andrews. "Robust low-rank and sparse decomposition for moving object detection : from matrices to tensors". Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS007/document.

Abstract:
This thesis introduces the recent advances on decomposition into low-rank plus sparse matrices and tensors, as well as the main contributions to face the principal issues in moving object detection. First, we present an overview of the state-of-the-art methods for low-rank and sparse decomposition, as well as their application to background modeling and foreground segmentation tasks. Next, we address the problem of background model initialization as a reconstruction process from missing/corrupted data. A novel methodology is presented showing an attractive potential for background modeling initialization in video surveillance. Subsequently, we propose a double-constrained version of robust principal component analysis to improve the foreground detection in maritime environments for automated video-surveillance applications. The algorithm makes use of double constraints extracted from spatial saliency maps to enhance object foreground detection in dynamic scenes. We also developed two incremental tensor-based algorithms in order to perform background/foreground separation from multidimensional streaming data. These works address the problem of low-rank and sparse decomposition on tensors. Finally, we present a particular work realized in conjunction with the Computer Vision Center (CVC) at Autonomous University of Barcelona (UAB)
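For context, the low-rank plus sparse decomposition underlying background/foreground separation is classically solved by principal component pursuit. Below is a minimal inexact-ALM sketch for the matrix case (video frames stacked as columns); the double-constrained and incremental tensor-based variants developed in the thesis go well beyond this.

```python
import numpy as np

def rpca(M, lam=None, n_iter=300, rho=1.2):
    """Principal component pursuit via an inexact augmented Lagrangian:
    split M into low-rank L and sparse S by minimizing ||L||_* + lam*||S||_1."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu, mu_max = 1.25 / np.linalg.norm(M, 2), 1e7
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update: singular value thresholding.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: entrywise soft thresholding.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual update and penalty growth.
        Y = Y + mu * (M - L - S)
        mu = min(rho * mu, mu_max)
    return L, S

rng = np.random.default_rng(5)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))          # low-rank part
S0 = (rng.random((60, 40)) < 0.05) * (5 * rng.standard_normal((60, 40)))  # sparse outliers
L, S = rpca(L0 + S0)
print(f"relative error on the low-rank part: "
      f"{np.linalg.norm(L - L0) / np.linalg.norm(L0):.2e}")
```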

Books on the topic "Low-Rank Tensor"

1

Ashraphijuo, Morteza. Low-Rank Tensor Completion - Fundamental Limits and Efficient Algorithms. [New York, N.Y.?]: [publisher not identified], 2020.

2

Lee, Namgil, Anh-Huy Phan, Danilo P. Mandic, Andrzej Cichocki, and Ivan Oseledets. Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 1 Low-Rank Tensor Decompositions. Now Publishers, 2016.


Book chapters on the topic "Low-Rank Tensor"

1

Liu, Yipeng, Jiani Liu, Zhen Long, and Ce Zhu. "Low-Rank Tensor Recovery". In Tensor Computation for Data Analysis, 93–114. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74386-4_4.

2

Zhong, Guoqiang, and Mohamed Cheriet. "Low Rank Tensor Manifold Learning". In Low-Rank and Sparse Modeling for Visual Analysis, 133–50. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12000-3_7.

3

Song, Zhao, David P. Woodruff, and Peilin Zhong. "Relative Error Tensor Low Rank Approximation". In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, 2772–89. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2019. http://dx.doi.org/10.1137/1.9781611975482.172.

4

Chen, Wanli, Xinge Zhu, Ruoqi Sun, Junjun He, Ruiyu Li, Xiaoyong Shen, and Bei Yu. "Tensor Low-Rank Reconstruction for Semantic Segmentation". In Computer Vision – ECCV 2020, 52–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58520-4_4.

5

Harmouch, Jouhayna, Bernard Mourrain, and Houssam Khalil. "Decomposition of Low Rank Multi-symmetric Tensor". In Mathematical Aspects of Computer and Information Sciences, 51–66. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-72453-9_4.

6

Grasedyck, Lars, and Christian Löbbert. "Parallel Algorithms for Low Rank Tensor Arithmetic". In Advances in Mechanics and Mathematics, 271–82. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-02487-1_16.

7

Nouy, Anthony. "Low-Rank Tensor Methods for Model Order Reduction". In Handbook of Uncertainty Quantification, 857–82. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-12385-1_21.

8

Kressner, Daniel, and Francisco Macedo. "Low-Rank Tensor Methods for Communicating Markov Processes". In Quantitative Evaluation of Systems, 25–40. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10696-0_4.

9

Nouy, Anthony. "Low-Rank Tensor Methods for Model Order Reduction". In Handbook of Uncertainty Quantification, 1–26. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-11259-6_21-1.

10

Purohit, Antra, Abhishek, Rakesh, and Shekhar Verma. "Optimal Low Rank Tensor Factorization for Deep Learning". In Communications in Computer and Information Science, 476–84. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2372-0_42.


Conference papers on the topic "Low-Rank Tensor"

1

Javed, Sajid, Jorge Dias, and Naoufel Werghi. "Low-Rank Tensor Tracking". In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00074.

2

Phan, Anh-Huy, Petr Tichavsky, and Andrzej Cichocki. "Low rank tensor deconvolution". In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178355.

3

Shi, Yuqing, Shiqiang Du, and Weilan Wang. "Robust Low-Rank and Sparse Tensor Decomposition for Low-Rank Tensor Completion". In 2021 33rd Chinese Control and Decision Conference (CCDC). IEEE, 2021. http://dx.doi.org/10.1109/ccdc52312.2021.9601608.

4

Wang, Zhanliang, Junyu Dong, Xinguo Liu, and Xueying Zeng. "Low-Rank Tensor Completion by Approximating the Tensor Average Rank". In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00457.

5

Bazerque, Juan Andres, Gonzalo Mateos, and Georgios B. Giannakis. "Nonparametric low-rank tensor imputation". In 2012 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2012. http://dx.doi.org/10.1109/ssp.2012.6319847.

6

Ribeiro, Lucas N., Andre L. F. de Almeida, and Joao C. M. Mota. "Low-Rank Tensor MMSE Equalization". In 2019 16th International Symposium on Wireless Communication Systems (ISWCS). IEEE, 2019. http://dx.doi.org/10.1109/iswcs.2019.8877123.

7

Liu, Han, Jing Liu, and Liyu Su. "Adaptive Rank Estimation Based Tensor Factorization Algorithm for Low-Rank Tensor Completion". In 2019 Chinese Control Conference (CCC). IEEE, 2019. http://dx.doi.org/10.23919/chicc.2019.8865482.

8

Haselby, Cullen, Santhosh Karnik, and Mark Iwen. "Tensor Sandwich: Tensor Completion for Low CP-Rank Tensors via Adaptive Random Sampling". In 2023 International Conference on Sampling Theory and Applications (SampTA). IEEE, 2023. http://dx.doi.org/10.1109/sampta59647.2023.10301204.

9

Wang, Wenqi, Vaneet Aggarwal, and Shuchin Aeron. "Efficient Low Rank Tensor Ring Completion". In 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017. http://dx.doi.org/10.1109/iccv.2017.607.

10

Li, Ping, Jiashi Feng, Xiaojie Jin, Luming Zhang, Xianghua Xu, and Shuicheng Yan. "Online Robust Low-Rank Tensor Learning". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/303.

Abstract:
The rapid increase of multidimensional data (a.k.a. tensors) like videos brings new challenges for low-rank data modeling approaches, such as dynamic data size, complex high-order relations, and multiplicity of low-rank structures. Resolving these challenges requires a new tensor analysis method that can perform tensor data analysis online, which however is still absent. In this paper, we propose an Online Robust Low-rank Tensor Modeling (ORLTM) approach to address these challenges. ORLTM dynamically explores the high-order correlations across all tensor modes for low-rank structure modeling. To analyze mixture data from multiple subspaces, ORLTM introduces a new dictionary learning component. ORLTM processes data in a streaming fashion and thus requires quite low memory cost that is independent of data size. This makes ORLTM quite suitable for processing large-scale tensor data. Empirical studies have validated the effectiveness of the proposed method on both synthetic data and one practical task, i.e., video background subtraction. In addition, we provide theoretical analysis regarding computational complexity and memory cost, demonstrating the efficiency of ORLTM rigorously.
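A toy illustration of the streaming idea is given below: a single pass over vectorized frames with memory that does not grow with the length of the stream, followed by projection onto the learned low-rank subspace. This conveys only the online, constant-memory flavor; ORLTM itself works on tensors, learns a dictionary, and handles outliers explicitly, none of which is sketched here.

```python
import numpy as np

def streaming_lowrank_basis(frames, rank=5):
    """One pass over the stream: maintain a running d x d second-moment matrix
    (memory depends on the frame size d, not on the number of frames), then
    take its top eigenvectors as the low-rank background basis."""
    d = frames.shape[1]
    C = np.zeros((d, d))
    for x in frames:                # streaming pass, one frame at a time
        C += np.outer(x, x)
    eigvals, eigvecs = np.linalg.eigh(C)
    return eigvecs[:, -rank:]       # columns spanning the dominant subspace

rng = np.random.default_rng(6)
true_basis, _ = np.linalg.qr(rng.standard_normal((400, 5)))
frames = rng.standard_normal((300, 5)) @ true_basis.T + 0.01 * rng.standard_normal((300, 400))
U = streaming_lowrank_basis(frames)
# Background estimate of the last frame = projection onto the learned subspace.
x = frames[-1]
print(np.linalg.norm(x - U @ (U.T @ x)) / np.linalg.norm(x))   # small residual
```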