Academic literature on the topic 'Low-Rank Tensor'
Below are lists of relevant journal articles, theses, books, book chapters, and conference papers on the topic 'Low-Rank Tensor.'
Journal articles on the topic "Low-Rank Tensor"
Zhong, Guoqiang, and Mohamed Cheriet. "Large Margin Low Rank Tensor Analysis." Neural Computation 26, no. 4 (April 2014): 761–80. http://dx.doi.org/10.1162/neco_a_00570.
Liu, Hongyi, Hanyang Li, Zebin Wu, and Zhihui Wei. "Hyperspectral Image Recovery Using Non-Convex Low-Rank Tensor Approximation." Remote Sensing 12, no. 14 (July 15, 2020): 2264. http://dx.doi.org/10.3390/rs12142264.
Zhou, Pan, Canyi Lu, Zhouchen Lin, and Chao Zhang. "Tensor Factorization for Low-Rank Tensor Completion." IEEE Transactions on Image Processing 27, no. 3 (March 2018): 1152–63. http://dx.doi.org/10.1109/tip.2017.2762595.
He, Yicong, and George K. Atia. "Multi-Mode Tensor Space Clustering Based on Low-Tensor-Rank Representation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6893–901. http://dx.doi.org/10.1609/aaai.v36i6.20646.
Liu, Xiaohua, and Guijin Tang. "Color Image Restoration Using Sub-Image Based Low-Rank Tensor Completion." Sensors 23, no. 3 (February 3, 2023): 1706. http://dx.doi.org/10.3390/s23031706.
Jiang, Yuanxiang, Qixiang Zhang, Zhanjiang Yuan, and Chen Wang. "Convex Robust Recovery of Corrupted Tensors via Tensor Singular Value Decomposition and Local Low-Rank Approximation." Journal of Physics: Conference Series 2670, no. 1 (December 1, 2023): 012026. http://dx.doi.org/10.1088/1742-6596/2670/1/012026.
Yu, Shicheng, Jiaqing Miao, Guibing Li, Weidong Jin, Gaoping Li, and Xiaoguang Liu. "Tensor Completion via Smooth Rank Function Low-Rank Approximate Regularization." Remote Sensing 15, no. 15 (August 3, 2023): 3862. http://dx.doi.org/10.3390/rs15153862.
Nie, Jiawang. "Low Rank Symmetric Tensor Approximations." SIAM Journal on Matrix Analysis and Applications 38, no. 4 (January 2017): 1517–40. http://dx.doi.org/10.1137/16m1107528.
Mickelin, Oscar, and Sertac Karaman. "Multiresolution Low-rank Tensor Formats." SIAM Journal on Matrix Analysis and Applications 41, no. 3 (January 2020): 1086–114. http://dx.doi.org/10.1137/19m1284579.
Gong, Xiao, Wei Chen, Jie Chen, and Bo Ai. "Tensor Denoising Using Low-Rank Tensor Train Decomposition." IEEE Signal Processing Letters 27 (2020): 1685–89. http://dx.doi.org/10.1109/lsp.2020.3025038.
Dissertations / Theses on the topic "Low-Rank Tensor"
Stojanac, Željka [Verfasser]. "Low-rank Tensor Recovery / Željka Stojanac." Bonn : Universitäts- und Landesbibliothek Bonn, 2016. http://d-nb.info/1119888565/34.
Shi, Qiquan. "Low rank tensor decomposition for feature extraction and tensor recovery." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/549.
Han, Xu. "Robust low-rank tensor approximations using group sparsity." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S001/document.
Full textLast decades, tensor decompositions have gained in popularity in several application domains. Most of the existing tensor decomposition methods require an estimating of the tensor rank in a preprocessing step to guarantee an outstanding decomposition results. Unfortunately, learning the exact rank of the tensor can be difficult in some particular cases, such as for low signal to noise ratio values. The objective of this thesis is to compute the best low-rank tensor approximation by a joint estimation of the rank and the loading matrices from the noisy tensor. Based on the low-rank property and an over estimation of the loading matrices or the core tensor, this joint estimation problem is solved by promoting group sparsity of over-estimated loading matrices and/or the core tensor. More particularly, three new methods are proposed to achieve efficient low rank estimation for three different tensors decomposition models, namely Canonical Polyadic Decomposition (CPD), Block Term Decomposition (BTD) and Multilinear Tensor Decomposition (MTD). All the proposed methods consist of two steps: the first step is designed to estimate the rank, and the second step uses the estimated rank to compute accurately the loading matrices. Numerical simulations with noisy tensor and results on real data the show effectiveness of the proposed methods compared to the state-of-the-art methods
Benedikt, Udo. "Low-Rank Tensor Approximation in post Hartree-Fock Methods." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-133194.
This thesis deals with the application of novel tensor decomposition and tensor representation techniques in highly accurate post-Hartree-Fock methods, in order to reduce the steep scaling of these methods with growing system size and thereby break the "curse of dimensionality." After a comparative survey of different representation formats, the application of the canonical polyadic (CP) format is discussed in detail. The focus is first on the conversion of an ordinary, index-based tensor into the CP format (tensor decomposition) and on a low-rank approximation method (rank reduction) for two-electron integrals in the AO basis. The decisive quantity for applicability is the scaling of the rank with growing system and basis-set size: while the storage cost and the cost of tensor manipulations in the CP format depend only linearly on the number of tensor dimensions, they also scale with the expansion length (rank). Subsequently, the AO-MO transformation and the MP2 algorithm with decomposed tensors in the CP format are discussed, and the scaling with growing system and basis-set size is again examined. Finally, a coupled-cluster algorithm is presented that works exclusively with tensors in a low-rank CP representation. Particular attention is paid to the successive tensor contractions during the iterative determination of the amplitudes, and the error propagation caused by the rank-reduction algorithm is analyzed. The complexity of the overall procedure is then assessed, and possible improvements to the reduction procedure are outlined.
Rabusseau, Guillaume. "A tensor perspective on weighted automata, low-rank regression and algebraic mixtures." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4062.
This thesis tackles several problems exploring connections between tensors and machine learning. In the first chapter, we propose an extension of the classical notion of recognizable function on strings and trees to graphs. We first show that the computations of weighted automata on strings and trees can be interpreted in a natural and unifying way using tensor networks, which naturally leads us to define a computational model on graphs: graph weighted models; we then study fundamental properties of this model and present preliminary learning results. The second chapter tackles a model reduction problem for weighted tree automata. We propose a principled approach to the following problem: given a weighted tree automaton with n states, how can we find an automaton with m
Alora, John Irvin P. "Automated synthesis of low-rank stochastic dynamical systems using the tensor-train decomposition." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105006.
Full textThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 79-83).
Cyber-physical systems are increasingly becoming integrated in various fields such as medicine, finance, robotics, and energy. In these systems and their applications, safety and correctness of operation are of primary concern, sparking a large amount of interest in the development of ways to verify system behavior. The tight coupling of physical constraints and computation that typically characterizes cyber-physical systems makes them extremely complex, resulting in unexpected failure modes. Furthermore, disturbances in the environment and uncertainties in the physical model require these systems to be robust. These are difficult constraints, requiring cyber-physical systems to be able to reason about their behavior and respond to events in real-time. Thus, the goal of automated synthesis is to construct a controller that provably implements a range of behaviors given by a specification of how the system should operate. Unfortunately, many approaches to automated synthesis are ad hoc and are limited to simple systems that admit specific structure (e.g. linear, affine systems). Not only that, but they are also designed without taking uncertainty into account. In order to tackle more general problems, several computational frameworks that allow for more general dynamics and uncertainty have been investigated. Furthermore, all of the existing computational algorithms suffer from the curse of dimensionality: the run time scales exponentially with increasing dimensionality of the state space. As a result, existing algorithms apply to systems with only a few degrees of freedom. In this thesis, we consider a stochastic optimal control problem with a special class of linear temporal logic specifications and propose a novel algorithm based on the tensor-train decomposition. We prove that the run time of the proposed algorithm scales linearly with the dimensionality of the state space and polynomially with the rank of the optimal cost-to-go function.
by John Irvin P. Alora.
S.M.
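The tensor-train decomposition this thesis builds on can be sketched in a few lines of numpy via the standard TT-SVD procedure of successive truncated SVDs (a generic illustration of the format, not the thesis's synthesis algorithm):

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Tensor-train decomposition by successive truncated SVDs (TT-SVD)."""
    d, shape = T.ndim, T.shape
    cores, r = [], 1
    M = T.reshape(shape[0], -1)
    for k in range(d - 1):
        M = M.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r_new = max(1, int(np.sum(s > eps * s[0])))  # numerical rank cut
        cores.append(U[:, :r_new].reshape(r, shape[k], r_new))
        M = s[:r_new, None] * Vt[:r_new]             # remainder carried forward
        r = r_new
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract a tensor train back into a full tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape([G.shape[1] for G in cores])

# A rank-2 CP tensor has TT-ranks at most 2.
rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', a, b, c)

cores = tt_svd(T)
tt_ranks = [G.shape[2] for G in cores[:-1]]
rel_err = np.linalg.norm(T - tt_full(cores)) / np.linalg.norm(T)
```

Storage in this format is a chain of small three-way cores, which is what lets the thesis's algorithm scale linearly in the state-space dimension rather than exponentially.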
Ceruti, Gianluca [Verfasser]. "Unconventional contributions to dynamical low-rank approximation of tree tensor networks / Gianluca Ceruti." Tübingen : Universitätsbibliothek Tübingen, 2021. http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1186805.
Full textGorodetsky, Alex Arkady. "Continuous low-rank tensor decompositions, with applications to stochastic optimal control and data assimilation." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108918.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (pages 205-214).
Optimal decision making under uncertainty is critical for control and optimization of complex systems. However, many techniques for solving problems such as stochastic optimal control and data assimilation encounter the curse of dimensionality when too many state variables are involved. In this thesis, we propose a framework for computing with high-dimensional functions that mitigates this exponential growth in complexity for problems with separable structure. Our framework tightly integrates two emerging areas: tensor decompositions and continuous computation. Tensor decompositions are able to effectively compress and operate with low-rank multidimensional arrays. Continuous computation is a paradigm for computing with functions instead of arrays, and it is best realized by Chebfun, a MATLAB package for computing with functions of up to three dimensions. Continuous computation provides a natural framework for building numerical algorithms that effectively, naturally, and automatically adapt to problem structure. The first part of this thesis describes a compressed continuous computation framework centered around a continuous analogue to the (discrete) tensor-train decomposition called the function-train decomposition. Computation with the function-train requires continuous matrix factorizations and continuous numerical linear algebra. Continuous analogues are presented for performing cross approximation; rounding; multilinear algebra operations such as addition, multiplication, integration, and differentiation; and continuous, rank-revealing, alternating least squares. Advantages of the function-train over the tensor-train include the ability to adaptively approximate functions and the ability to compute with functions that are parameterized differently. For example, while elementwise multiplication between tensors of different sizes is undefined, functions in FT format can be readily multiplied together. 
Next, we develop compressed versions of value iteration, policy iteration, and multilevel algorithms for solving dynamic programming problems arising in stochastic optimal control. These techniques enable computing global solutions to a broader set of problems, for example those with non-affine control inputs, than previously possible. Examples are presented for motion planning with robotic systems that have up to seven states. Finally, we use the FT to extend integration-based Gaussian filtering to larger state spaces than previously considered. Examples are presented for dynamical systems with up to twenty states.
by Alex Arkady Gorodetsky.
Ph. D.
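The function-train rests on the observation that many functions have low separation rank. A minimal sampled illustration (standing in for the thesis's continuous, Chebfun-style machinery): sin(x + y) separates into exactly two products of univariate functions, which the singular values of a sample matrix expose.

```python
import numpy as np

# Sample f(x, y) = sin(x + y) on a grid; by the angle-addition identity it
# equals sin(x)cos(y) + cos(x)sin(y), i.e. it has separation rank exactly 2.
x = np.linspace(0.0, 2.0 * np.pi, 60)
y = np.linspace(0.0, 2.0 * np.pi, 80)
F = np.sin(x[:, None] + y[None, :])

s = np.linalg.svd(F, compute_uv=False)
num_rank = int(np.sum(s > 1e-10 * s[0]))  # numerical separation rank

# A rank-2 truncated SVD therefore reproduces the samples to machine precision.
U, sv, Vt = np.linalg.svd(F, full_matrices=False)
F2 = (U[:, :2] * sv[:2]) @ Vt[:2]
rel_err = np.linalg.norm(F - F2) / np.linalg.norm(F)
```

The continuous computation framework in the thesis works with such separated representations directly as functions, rather than through fixed sample grids as here.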
Benedikt, Udo [Verfasser], Alexander A. [Akademischer Betreuer] Auer, and Sibylle [Gutachter] Gemming. "Low-Rank Tensor Approximation in post Hartree-Fock Methods / Udo Benedikt ; Gutachter: Sibylle Gemming ; Betreuer: Alexander A. Auer." Chemnitz : Universitätsbibliothek Chemnitz, 2014. http://d-nb.info/1230577440/34.
Cordolino Sobral, Andrews. "Robust low-rank and sparse decomposition for moving object detection : from matrices to tensors." Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS007/document.
This thesis introduces the recent advances on decomposition into low-rank plus sparse matrices and tensors, as well as the main contributions for addressing the principal issues in moving object detection. First, we present an overview of the state-of-the-art methods for low-rank and sparse decomposition, as well as their application to background modeling and foreground segmentation tasks. Next, we address the problem of background model initialization as a reconstruction process from missing/corrupted data. A novel methodology is presented showing an attractive potential for background modeling initialization in video surveillance. Subsequently, we propose a double-constrained version of robust principal component analysis to improve foreground detection in maritime environments for automated video-surveillance applications. The algorithm makes use of double constraints extracted from spatial saliency maps to enhance object foreground detection in dynamic scenes. We also developed two incremental tensor-based algorithms in order to perform background/foreground separation from multidimensional streaming data. These works address the problem of low-rank and sparse decomposition on tensors. Finally, we present a particular work carried out in conjunction with the Computer Vision Center (CVC) at the Autonomous University of Barcelona (UAB).
Books on the topic "Low-Rank Tensor"
Ashraphijuo, Morteza. Low-Rank Tensor Completion - Fundamental Limits and Efficient Algorithms. [New York, N.Y.?]: [publisher not identified], 2020.
Lee, Namgil, Anh-Huy Phan, Danilo P. Mandic, Andrzej Cichocki, and Ivan Oseledets. Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 1 Low-Rank Tensor Decompositions. Now Publishers, 2016.
Find full textBook chapters on the topic "Low-Rank Tensor"
Liu, Yipeng, Jiani Liu, Zhen Long, and Ce Zhu. "Low-Rank Tensor Recovery." In Tensor Computation for Data Analysis, 93–114. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74386-4_4.
Zhong, Guoqiang, and Mohamed Cheriet. "Low Rank Tensor Manifold Learning." In Low-Rank and Sparse Modeling for Visual Analysis, 133–50. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12000-3_7.
Song, Zhao, David P. Woodruff, and Peilin Zhong. "Relative Error Tensor Low Rank Approximation." In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, 2772–89. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2019. http://dx.doi.org/10.1137/1.9781611975482.172.
Chen, Wanli, Xinge Zhu, Ruoqi Sun, Junjun He, Ruiyu Li, Xiaoyong Shen, and Bei Yu. "Tensor Low-Rank Reconstruction for Semantic Segmentation." In Computer Vision – ECCV 2020, 52–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58520-4_4.
Harmouch, Jouhayna, Bernard Mourrain, and Houssam Khalil. "Decomposition of Low Rank Multi-symmetric Tensor." In Mathematical Aspects of Computer and Information Sciences, 51–66. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-72453-9_4.
Grasedyck, Lars, and Christian Löbbert. "Parallel Algorithms for Low Rank Tensor Arithmetic." In Advances in Mechanics and Mathematics, 271–82. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-02487-1_16.
Nouy, Anthony. "Low-Rank Tensor Methods for Model Order Reduction." In Handbook of Uncertainty Quantification, 857–82. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-12385-1_21.
Kressner, Daniel, and Francisco Macedo. "Low-Rank Tensor Methods for Communicating Markov Processes." In Quantitative Evaluation of Systems, 25–40. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10696-0_4.
Nouy, Anthony. "Low-Rank Tensor Methods for Model Order Reduction." In Handbook of Uncertainty Quantification, 1–26. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-11259-6_21-1.
Purohit, Antra, Abhishek, Rakesh, and Shekhar Verma. "Optimal Low Rank Tensor Factorization for Deep Learning." In Communications in Computer and Information Science, 476–84. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2372-0_42.
Full textConference papers on the topic "Low-Rank Tensor"
Javed, Sajid, Jorge Dias, and Naoufel Werghi. "Low-Rank Tensor Tracking." In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00074.
Phan, Anh-Huy, Petr Tichavsky, and Andrzej Cichocki. "Low rank tensor deconvolution." In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178355.
Shi, Yuqing, Shiqiang Du, and Weilan Wang. "Robust Low-Rank and Sparse Tensor Decomposition for Low-Rank Tensor Completion." In 2021 33rd Chinese Control and Decision Conference (CCDC). IEEE, 2021. http://dx.doi.org/10.1109/ccdc52312.2021.9601608.
Wang, Zhanliang, Junyu Dong, Xinguo Liu, and Xueying Zeng. "Low-Rank Tensor Completion by Approximating the Tensor Average Rank." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00457.
Bazerque, Juan Andres, Gonzalo Mateos, and Georgios B. Giannakis. "Nonparametric low-rank tensor imputation." In 2012 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2012. http://dx.doi.org/10.1109/ssp.2012.6319847.
Ribeiro, Lucas N., Andre L. F. de Almeida, and Joao C. M. Mota. "Low-Rank Tensor MMSE Equalization." In 2019 16th International Symposium on Wireless Communication Systems (ISWCS). IEEE, 2019. http://dx.doi.org/10.1109/iswcs.2019.8877123.
Liu, Han, Jing Liu, and Liyu Su. "Adaptive Rank Estimation Based Tensor Factorization Algorithm for Low-Rank Tensor Completion." In 2019 Chinese Control Conference (CCC). IEEE, 2019. http://dx.doi.org/10.23919/chicc.2019.8865482.
Haselby, Cullen, Santhosh Karnik, and Mark Iwen. "Tensor Sandwich: Tensor Completion for Low CP-Rank Tensors via Adaptive Random Sampling." In 2023 International Conference on Sampling Theory and Applications (SampTA). IEEE, 2023. http://dx.doi.org/10.1109/sampta59647.2023.10301204.
Wang, Wenqi, Vaneet Aggarwal, and Shuchin Aeron. "Efficient Low Rank Tensor Ring Completion." In 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017. http://dx.doi.org/10.1109/iccv.2017.607.
Li, Ping, Jiashi Feng, Xiaojie Jin, Luming Zhang, Xianghua Xu, and Shuicheng Yan. "Online Robust Low-Rank Tensor Learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/303.