Scientific literature on the topic "Low-Rank Tensor"
Thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Low-Rank Tensor".
Journal articles on the topic "Low-Rank Tensor"
Zhong, Guoqiang, and Mohamed Cheriet. "Large Margin Low Rank Tensor Analysis." Neural Computation 26, no. 4 (April 2014): 761–80. http://dx.doi.org/10.1162/neco_a_00570.
Liu, Hongyi, Hanyang Li, Zebin Wu, and Zhihui Wei. "Hyperspectral Image Recovery Using Non-Convex Low-Rank Tensor Approximation." Remote Sensing 12, no. 14 (July 15, 2020): 2264. http://dx.doi.org/10.3390/rs12142264.
Zhou, Pan, Canyi Lu, Zhouchen Lin, and Chao Zhang. "Tensor Factorization for Low-Rank Tensor Completion." IEEE Transactions on Image Processing 27, no. 3 (March 2018): 1152–63. http://dx.doi.org/10.1109/tip.2017.2762595.
He, Yicong, and George K. Atia. "Multi-Mode Tensor Space Clustering Based on Low-Tensor-Rank Representation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6893–901. http://dx.doi.org/10.1609/aaai.v36i6.20646.
Liu, Xiaohua, and Guijin Tang. "Color Image Restoration Using Sub-Image Based Low-Rank Tensor Completion." Sensors 23, no. 3 (February 3, 2023): 1706. http://dx.doi.org/10.3390/s23031706.
Jiang, Yuanxiang, Qixiang Zhang, Zhanjiang Yuan, and Chen Wang. "Convex Robust Recovery of Corrupted Tensors via Tensor Singular Value Decomposition and Local Low-Rank Approximation." Journal of Physics: Conference Series 2670, no. 1 (December 1, 2023): 012026. http://dx.doi.org/10.1088/1742-6596/2670/1/012026.
Yu, Shicheng, Jiaqing Miao, Guibing Li, Weidong Jin, Gaoping Li, and Xiaoguang Liu. "Tensor Completion via Smooth Rank Function Low-Rank Approximate Regularization." Remote Sensing 15, no. 15 (August 3, 2023): 3862. http://dx.doi.org/10.3390/rs15153862.
Nie, Jiawang. "Low Rank Symmetric Tensor Approximations." SIAM Journal on Matrix Analysis and Applications 38, no. 4 (January 2017): 1517–40. http://dx.doi.org/10.1137/16m1107528.
Mickelin, Oscar, and Sertac Karaman. "Multiresolution Low-rank Tensor Formats." SIAM Journal on Matrix Analysis and Applications 41, no. 3 (January 2020): 1086–114. http://dx.doi.org/10.1137/19m1284579.
Gong, Xiao, Wei Chen, Jie Chen, and Bo Ai. "Tensor Denoising Using Low-Rank Tensor Train Decomposition." IEEE Signal Processing Letters 27 (2020): 1685–89. http://dx.doi.org/10.1109/lsp.2020.3025038.
Theses on the topic "Low-Rank Tensor"
Stojanac, Željka [author]. "Low-rank Tensor Recovery / Željka Stojanac." Bonn: Universitäts- und Landesbibliothek Bonn, 2016. http://d-nb.info/1119888565/34.
Shi, Qiquan. "Low rank tensor decomposition for feature extraction and tensor recovery." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/549.
Han, Xu. "Robust low-rank tensor approximations using group sparsity." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S001/document.
Texte intégralLast decades, tensor decompositions have gained in popularity in several application domains. Most of the existing tensor decomposition methods require an estimating of the tensor rank in a preprocessing step to guarantee an outstanding decomposition results. Unfortunately, learning the exact rank of the tensor can be difficult in some particular cases, such as for low signal to noise ratio values. The objective of this thesis is to compute the best low-rank tensor approximation by a joint estimation of the rank and the loading matrices from the noisy tensor. Based on the low-rank property and an over estimation of the loading matrices or the core tensor, this joint estimation problem is solved by promoting group sparsity of over-estimated loading matrices and/or the core tensor. More particularly, three new methods are proposed to achieve efficient low rank estimation for three different tensors decomposition models, namely Canonical Polyadic Decomposition (CPD), Block Term Decomposition (BTD) and Multilinear Tensor Decomposition (MTD). All the proposed methods consist of two steps: the first step is designed to estimate the rank, and the second step uses the estimated rank to compute accurately the loading matrices. Numerical simulations with noisy tensor and results on real data the show effectiveness of the proposed methods compared to the state-of-the-art methods
Benedikt, Udo. "Low-Rank Tensor Approximation in post Hartree-Fock Methods." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-133194.
This thesis deals with the application of novel tensor decomposition and tensor representation techniques in highly accurate post-Hartree-Fock methods, in order to reduce the steep scaling of these methods with growing system size and thus to break the "curse of dimensionality". After a comparative discussion of different representation formats, the application of the canonical polyadic (CP) format is examined in detail. The focus is first on the conversion of an ordinary, index-based tensor into the CP format (tensor decomposition) and on a low-rank approximation method (rank reduction) for two-electron integrals in the AO basis. The decisive quantity for applicability is the scaling of the rank with growing system and basis-set size: while the storage requirements and the computational cost of tensor manipulations in the CP format depend only linearly on the number of tensor dimensions, they also scale with the expansion length (rank). Subsequently, the AO-MO transformation and the MP2 algorithm with decomposed tensors in the CP format are discussed, and the scaling with growing system and basis-set size is again examined. Finally, a coupled-cluster algorithm is presented that works exclusively with tensors in a low-rank CP representation. Particular attention is paid to the successive tensor contractions during the iterative determination of the amplitudes, and the error propagation caused by applying the rank-reduction algorithm is analyzed. In closing, the complexity of the overall procedure is assessed and possible improvements of the reduction procedure are pointed out.
Rabusseau, Guillaume. "A tensor perspective on weighted automata, low-rank regression and algebraic mixtures." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4062.
This thesis tackles several problems exploring connections between tensors and machine learning. In the first chapter, we propose an extension of the classical notion of recognizable function on strings and trees to graphs. We first show that the computations of weighted automata on strings and trees can be interpreted in a natural and unifying way using tensor networks, which naturally leads us to define a computational model on graphs: graph weighted models; we then study fundamental properties of this model and present preliminary learning results. The second chapter tackles a model reduction problem for weighted tree automata. We propose a principled approach to the following problem: given a weighted tree automaton with n states, how can we find an automaton with m
Alora, John Irvin P. "Automated synthesis of low-rank stochastic dynamical systems using the tensor-train decomposition." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105006.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 79-83).
Cyber-physical systems are increasingly being integrated in fields such as medicine, finance, robotics, and energy. In these systems and their applications, safety and correctness of operation are of primary concern, sparking a large amount of interest in ways to verify system behavior. The tight coupling of physical constraints and computation that typically characterizes cyber-physical systems makes them extremely complex, resulting in unexpected failure modes. Furthermore, disturbances in the environment and uncertainties in the physical model require these systems to be robust. These are difficult constraints, requiring cyber-physical systems to be able to reason about their behavior and respond to events in real time. Thus, the goal of automated synthesis is to construct a controller that provably implements a range of behaviors given by a specification of how the system should operate. Unfortunately, many approaches to automated synthesis are ad hoc and limited to simple systems that admit specific structure (e.g., linear, affine systems); moreover, they are designed without taking uncertainty into account. Several computational frameworks allowing more general dynamics and uncertainty have been investigated, but all existing computational algorithms suffer from the curse of dimensionality: the run time scales exponentially with the dimensionality of the state space. As a result, existing algorithms apply only to systems with a few degrees of freedom. In this thesis, we consider a stochastic optimal control problem with a special class of linear temporal logic specifications and propose a novel algorithm based on the tensor-train decomposition. We prove that the run time of the proposed algorithm scales linearly with the dimensionality of the state space and polynomially with the rank of the optimal cost-to-go function.
by John Irvin P. Alora.
S.M.
Ceruti, Gianluca [author]. "Unconventional contributions to dynamical low-rank approximation of tree tensor networks / Gianluca Ceruti." Tübingen: Universitätsbibliothek Tübingen, 2021. http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1186805.
Gorodetsky, Alex Arkady. "Continuous low-rank tensor decompositions, with applications to stochastic optimal control and data assimilation." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108918.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 205-214).
Optimal decision making under uncertainty is critical for control and optimization of complex systems. However, many techniques for solving problems such as stochastic optimal control and data assimilation encounter the curse of dimensionality when too many state variables are involved. In this thesis, we propose a framework for computing with high-dimensional functions that mitigates this exponential growth in complexity for problems with separable structure. Our framework tightly integrates two emerging areas: tensor decompositions and continuous computation. Tensor decompositions are able to effectively compress and operate with low-rank multidimensional arrays. Continuous computation is a paradigm for computing with functions instead of arrays, and it is best realized by Chebfun, a MATLAB package for computing with functions of up to three dimensions. Continuous computation provides a natural framework for building numerical algorithms that effectively, naturally, and automatically adapt to problem structure. The first part of this thesis describes a compressed continuous computation framework centered around a continuous analogue to the (discrete) tensor-train decomposition called the function-train decomposition. Computation with the function-train requires continuous matrix factorizations and continuous numerical linear algebra. Continuous analogues are presented for performing cross approximation; rounding; multilinear algebra operations such as addition, multiplication, integration, and differentiation; and continuous, rank-revealing, alternating least squares. Advantages of the function-train over the tensor-train include the ability to adaptively approximate functions and the ability to compute with functions that are parameterized differently. For example, while elementwise multiplication between tensors of different sizes is undefined, functions in FT format can be readily multiplied together. 
Next, we develop compressed versions of value iteration, policy iteration, and multilevel algorithms for solving dynamic programming problems arising in stochastic optimal control. These techniques enable computing global solutions to a broader set of problems, for example those with non-affine control inputs, than previously possible. Examples are presented for motion planning with robotic systems that have up to seven states. Finally, we use the FT to extend integration-based Gaussian filtering to larger state spaces than previously considered. Examples are presented for dynamical systems with up to twenty states.
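The (discrete) tensor-train decomposition that the function-train generalizes can be computed by the standard TT-SVD sweep: reshape, take a truncated SVD, fold off a core, and repeat. A minimal numpy sketch of that discrete baseline (an illustration of the format, not the continuous function-train machinery described above):

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Tensor-train decomposition via sequential truncated SVDs (TT-SVD).

    Returns a list of 3-way cores G_k of shape (r_{k-1}, n_k, r_k),
    with boundary ranks r_0 = r_d = 1.
    """
    shape = T.shape
    cores, r_prev = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = int(np.sum(s > eps * s[0]))          # drop negligible singular values
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # absorb the singular values into the remainder and refold it
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract the cores back into a dense tensor (for checking)."""
    T = cores[0]
    for core in cores[1:]:
        T = np.tensordot(T, core, axes=1)        # contract the shared rank index
    return T.reshape([c.shape[1] for c in cores])
```

For a generic full-rank tensor the reconstruction is exact; the `eps` threshold is where low-rank truncation, and hence compression, enters.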
by Alex Arkady Gorodetsky.
Ph. D.
Benedikt, Udo [author], Alexander A. [academic supervisor] Auer, and Sibylle [reviewer] Gemming. "Low-Rank Tensor Approximation in post Hartree-Fock Methods / Udo Benedikt; Reviewer: Sibylle Gemming; Supervisor: Alexander A. Auer." Chemnitz: Universitätsbibliothek Chemnitz, 2014. http://d-nb.info/1230577440/34.
Cordolino Sobral, Andrews. "Robust low-rank and sparse decomposition for moving object detection: from matrices to tensors." Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS007/document.
This thesis introduces recent advances in decomposition into low-rank plus sparse matrices and tensors, as well as the main contributions addressing the principal issues in moving object detection. First, we present an overview of the state-of-the-art methods for low-rank and sparse decomposition and their application to background modeling and foreground segmentation tasks. Next, we address the problem of background model initialization as a reconstruction process from missing/corrupted data. A novel methodology is presented that shows attractive potential for background modeling initialization in video surveillance. Subsequently, we propose a double-constrained version of robust principal component analysis to improve foreground detection in maritime environments for automated video-surveillance applications. The algorithm makes use of double constraints extracted from spatial saliency maps to enhance object foreground detection in dynamic scenes. We also developed two incremental tensor-based algorithms to perform background/foreground separation from multidimensional streaming data; these works address the problem of low-rank and sparse decomposition on tensors. Finally, we present work carried out in conjunction with the Computer Vision Center (CVC) at the Autonomous University of Barcelona (UAB).
Books on the topic "Low-Rank Tensor"
Ashraphijuo, Morteza. Low-Rank Tensor Completion - Fundamental Limits and Efficient Algorithms. [New York, N.Y.?]: [publisher not identified], 2020.
Lee, Namgil, Anh-Huy Phan, Danilo P. Mandic, Andrzej Cichocki, and Ivan Oseledets. Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 1 Low-Rank Tensor Decompositions. Now Publishers, 2016.
Book chapters on the topic "Low-Rank Tensor"
Liu, Yipeng, Jiani Liu, Zhen Long, and Ce Zhu. "Low-Rank Tensor Recovery." In Tensor Computation for Data Analysis, 93–114. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74386-4_4.
Zhong, Guoqiang, and Mohamed Cheriet. "Low Rank Tensor Manifold Learning." In Low-Rank and Sparse Modeling for Visual Analysis, 133–50. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12000-3_7.
Song, Zhao, David P. Woodruff, and Peilin Zhong. "Relative Error Tensor Low Rank Approximation." In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, 2772–89. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2019. http://dx.doi.org/10.1137/1.9781611975482.172.
Chen, Wanli, Xinge Zhu, Ruoqi Sun, Junjun He, Ruiyu Li, Xiaoyong Shen, and Bei Yu. "Tensor Low-Rank Reconstruction for Semantic Segmentation." In Computer Vision – ECCV 2020, 52–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58520-4_4.
Harmouch, Jouhayna, Bernard Mourrain, and Houssam Khalil. "Decomposition of Low Rank Multi-symmetric Tensor." In Mathematical Aspects of Computer and Information Sciences, 51–66. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-72453-9_4.
Grasedyck, Lars, and Christian Löbbert. "Parallel Algorithms for Low Rank Tensor Arithmetic." In Advances in Mechanics and Mathematics, 271–82. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-02487-1_16.
Nouy, Anthony. "Low-Rank Tensor Methods for Model Order Reduction." In Handbook of Uncertainty Quantification, 857–82. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-12385-1_21.
Kressner, Daniel, and Francisco Macedo. "Low-Rank Tensor Methods for Communicating Markov Processes." In Quantitative Evaluation of Systems, 25–40. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10696-0_4.
Nouy, Anthony. "Low-Rank Tensor Methods for Model Order Reduction." In Handbook of Uncertainty Quantification, 1–26. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-11259-6_21-1.
Purohit, Antra, Abhishek, Rakesh, and Shekhar Verma. "Optimal Low Rank Tensor Factorization for Deep Learning." In Communications in Computer and Information Science, 476–84. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2372-0_42.
Texte intégralActes de conférences sur le sujet "Low-Rank Tensor"
Javed, Sajid, Jorge Dias, and Naoufel Werghi. "Low-Rank Tensor Tracking." In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00074.
Phan, Anh-Huy, Petr Tichavsky, and Andrzej Cichocki. "Low rank tensor deconvolution." In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178355.
Shi, Yuqing, Shiqiang Du, and Weilan Wang. "Robust Low-Rank and Sparse Tensor Decomposition for Low-Rank Tensor Completion." In 2021 33rd Chinese Control and Decision Conference (CCDC). IEEE, 2021. http://dx.doi.org/10.1109/ccdc52312.2021.9601608.
Wang, Zhanliang, Junyu Dong, Xinguo Liu, and Xueying Zeng. "Low-Rank Tensor Completion by Approximating the Tensor Average Rank." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00457.
Bazerque, Juan Andres, Gonzalo Mateos, and Georgios B. Giannakis. "Nonparametric low-rank tensor imputation." In 2012 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2012. http://dx.doi.org/10.1109/ssp.2012.6319847.
Ribeiro, Lucas N., Andre L. F. de Almeida, and Joao C. M. Mota. "Low-Rank Tensor MMSE Equalization." In 2019 16th International Symposium on Wireless Communication Systems (ISWCS). IEEE, 2019. http://dx.doi.org/10.1109/iswcs.2019.8877123.
Liu, Han, Jing Liu, and Liyu Su. "Adaptive Rank Estimation Based Tensor Factorization Algorithm for Low-Rank Tensor Completion." In 2019 Chinese Control Conference (CCC). IEEE, 2019. http://dx.doi.org/10.23919/chicc.2019.8865482.
Haselby, Cullen, Santhosh Karnik, and Mark Iwen. "Tensor Sandwich: Tensor Completion for Low CP-Rank Tensors via Adaptive Random Sampling." In 2023 International Conference on Sampling Theory and Applications (SampTA). IEEE, 2023. http://dx.doi.org/10.1109/sampta59647.2023.10301204.
Wang, Wenqi, Vaneet Aggarwal, and Shuchin Aeron. "Efficient Low Rank Tensor Ring Completion." In 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017. http://dx.doi.org/10.1109/iccv.2017.607.
Li, Ping, Jiashi Feng, Xiaojie Jin, Luming Zhang, Xianghua Xu, and Shuicheng Yan. "Online Robust Low-Rank Tensor Learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/303.