A ready-made bibliography on the topic "Low-Rank Tensor"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Low-Rank Tensor".
Next to every work in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.
Journal articles on the topic "Low-Rank Tensor"
Zhong, Guoqiang, and Mohamed Cheriet. "Large Margin Low Rank Tensor Analysis". Neural Computation 26, no. 4 (April 2014): 761–80. http://dx.doi.org/10.1162/neco_a_00570.
Liu, Hongyi, Hanyang Li, Zebin Wu, and Zhihui Wei. "Hyperspectral Image Recovery Using Non-Convex Low-Rank Tensor Approximation". Remote Sensing 12, no. 14 (July 15, 2020): 2264. http://dx.doi.org/10.3390/rs12142264.
Zhou, Pan, Canyi Lu, Zhouchen Lin, and Chao Zhang. "Tensor Factorization for Low-Rank Tensor Completion". IEEE Transactions on Image Processing 27, no. 3 (March 2018): 1152–63. http://dx.doi.org/10.1109/tip.2017.2762595.
He, Yicong, and George K. Atia. "Multi-Mode Tensor Space Clustering Based on Low-Tensor-Rank Representation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6893–901. http://dx.doi.org/10.1609/aaai.v36i6.20646.
Liu, Xiaohua, and Guijin Tang. "Color Image Restoration Using Sub-Image Based Low-Rank Tensor Completion". Sensors 23, no. 3 (February 3, 2023): 1706. http://dx.doi.org/10.3390/s23031706.
Jiang, Yuanxiang, Qixiang Zhang, Zhanjiang Yuan, and Chen Wang. "Convex Robust Recovery of Corrupted Tensors via Tensor Singular Value Decomposition and Local Low-Rank Approximation". Journal of Physics: Conference Series 2670, no. 1 (December 1, 2023): 012026. http://dx.doi.org/10.1088/1742-6596/2670/1/012026.
Yu, Shicheng, Jiaqing Miao, Guibing Li, Weidong Jin, Gaoping Li, and Xiaoguang Liu. "Tensor Completion via Smooth Rank Function Low-Rank Approximate Regularization". Remote Sensing 15, no. 15 (August 3, 2023): 3862. http://dx.doi.org/10.3390/rs15153862.
Nie, Jiawang. "Low Rank Symmetric Tensor Approximations". SIAM Journal on Matrix Analysis and Applications 38, no. 4 (January 2017): 1517–40. http://dx.doi.org/10.1137/16m1107528.
Mickelin, Oscar, and Sertac Karaman. "Multiresolution Low-rank Tensor Formats". SIAM Journal on Matrix Analysis and Applications 41, no. 3 (January 2020): 1086–114. http://dx.doi.org/10.1137/19m1284579.
Gong, Xiao, Wei Chen, Jie Chen, and Bo Ai. "Tensor Denoising Using Low-Rank Tensor Train Decomposition". IEEE Signal Processing Letters 27 (2020): 1685–89. http://dx.doi.org/10.1109/lsp.2020.3025038.
Doctoral dissertations on the topic "Low-Rank Tensor"
Stojanac, Željka [Verfasser]. "Low-rank Tensor Recovery / Željka Stojanac". Bonn : Universitäts- und Landesbibliothek Bonn, 2016. http://d-nb.info/1119888565/34.
Shi, Qiquan. "Low rank tensor decomposition for feature extraction and tensor recovery". HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/549.
Han, Xu. "Robust low-rank tensor approximations using group sparsity". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S001/document.
Pełny tekst źródłaLast decades, tensor decompositions have gained in popularity in several application domains. Most of the existing tensor decomposition methods require an estimating of the tensor rank in a preprocessing step to guarantee an outstanding decomposition results. Unfortunately, learning the exact rank of the tensor can be difficult in some particular cases, such as for low signal to noise ratio values. The objective of this thesis is to compute the best low-rank tensor approximation by a joint estimation of the rank and the loading matrices from the noisy tensor. Based on the low-rank property and an over estimation of the loading matrices or the core tensor, this joint estimation problem is solved by promoting group sparsity of over-estimated loading matrices and/or the core tensor. More particularly, three new methods are proposed to achieve efficient low rank estimation for three different tensors decomposition models, namely Canonical Polyadic Decomposition (CPD), Block Term Decomposition (BTD) and Multilinear Tensor Decomposition (MTD). All the proposed methods consist of two steps: the first step is designed to estimate the rank, and the second step uses the estimated rank to compute accurately the loading matrices. Numerical simulations with noisy tensor and results on real data the show effectiveness of the proposed methods compared to the state-of-the-art methods
Benedikt, Udo. "Low-Rank Tensor Approximation in post Hartree-Fock Methods". Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-133194.
Pełny tekst źródłaDie vorliegende Arbeit beschäftigt sich mit der Anwendung neuartiger Tensorzerlegungs- und Tensorrepesentationstechniken in hochgenauen post Hartree-Fock Methoden um das hohe Skalierungsverhalten dieser Verfahren mit steigender Systemgröße zu verringern und somit den "Fluch der Dimensionen" zu brechen. Nach einer vergleichenden Betrachtung verschiedener Representationsformate wird auf die Anwendung des "canonical polyadic" Formates (CP) detailliert eingegangen. Dabei stehen zunächst die Umwandlung eines normalen, indexbasierten Tensors in das CP Format (Tensorzerlegung) und eine Methode der Niedrigrang Approximation (Rangreduktion) für Zweielektronenintegrale in der AO Basis im Vordergrund. Die entscheidende Größe für die Anwendbarkeit ist dabei das Skalierungsverhalten das Ranges mit steigender System- und Basissatzgröße, da der Speicheraufwand und die Berechnungskosten für Tensormanipulationen im CP Format zwar nur noch linear von der Anzahl der Dimensionen des Tensors abhängen, allerdings auch mit der Expansionslänge (Rang) skalieren. Im Anschluss wird die AO-MO Transformation und der MP2 Algorithmus mit zerlegten Tensoren im CP Format diskutiert und erneut das Skalierungsverhalten mit steigender System- und Basissatzgröße untersucht. Abschließend wird ein Coupled-Cluster Algorithmus vorgestellt, welcher ausschließlich mit Tensoren in einer Niedrigrang CP Darstellung arbeitet. Dabei wird vor allem auf die sukzessive Tensorkontraktion während der iterativen Bestimmung der Amplituden eingegangen und die Fehlerfortpanzung durch Anwendung des Rangreduktions-Algorithmus analysiert. Abschließend wird die Komplexität des gesamten Verfahrens bewertet und Verbesserungsmöglichkeiten der Reduktionsprozedur aufgezeigt
Rabusseau, Guillaume. "A tensor perspective on weighted automata, low-rank regression and algebraic mixtures". Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4062.
This thesis tackles several problems exploring connections between tensors and machine learning. In the first chapter, we propose an extension of the classical notion of recognizable function on strings and trees to graphs. We first show that the computations of weighted automata on strings and trees can be interpreted in a natural and unifying way using tensor networks, which naturally leads us to define a computational model on graphs: graph weighted models; we then study fundamental properties of this model and present preliminary learning results. The second chapter tackles a model reduction problem for weighted tree automata. We propose a principled approach to the following problem: given a weighted tree automaton with n states, how can we find an automaton with m
Alora, John Irvin P. "Automated synthesis of low-rank stochastic dynamical systems using the tensor-train decomposition". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105006.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 79-83).
Cyber-physical systems are increasingly becoming integrated in various fields such as medicine, finance, robotics, and energy. In these systems and their applications, safety and correctness of operation are of primary concern, sparking a large amount of interest in the development of ways to verify system behavior. The tight coupling of physical constraints and computation that typically characterizes cyber-physical systems makes them extremely complex, resulting in unexpected failure modes. Furthermore, disturbances in the environment and uncertainties in the physical model require these systems to be robust. These are difficult constraints, requiring cyber-physical systems to be able to reason about their behavior and respond to events in real time. Thus, the goal of automated synthesis is to construct a controller that provably implements a range of behaviors given by a specification of how the system should operate. Unfortunately, many approaches to automated synthesis are ad hoc and are limited to simple systems that admit specific structure (e.g. linear, affine systems). Moreover, they are designed without taking uncertainty into account. To tackle more general problems, several computational frameworks have been developed that allow more general dynamics and uncertainty to be investigated. However, all of the existing computational algorithms suffer from the curse of dimensionality: the run time scales exponentially with the dimensionality of the state space. As a result, existing algorithms apply only to systems with a few degrees of freedom. In this thesis, we consider a stochastic optimal control problem with a special class of linear temporal logic specifications and propose a novel algorithm based on the tensor-train decomposition. We prove that the run time of the proposed algorithm scales linearly with the dimensionality of the state space and polynomially with the rank of the optimal cost-to-go function.
by John Irvin P. Alora.
S.M.
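The tensor-train format behind theses such as the two above can be illustrated with a minimal TT-SVD sketch in the spirit of Oseledets' construction: a d-way array is factored into d three-way cores by sequential SVDs, so storage grows linearly in d for fixed ranks. The relative singular-value cutoff used here is a simplifying assumption, not the error-controlled truncation used in production implementations.

```python
import numpy as np

def tt_svd(T, tol=1e-10):
    """Decompose a full d-way array into tensor-train cores via sequential SVDs."""
    shape = T.shape
    cores, r = [], 1
    C = T.copy()
    for k in range(len(shape) - 1):
        C = C.reshape(r * shape[k], -1)             # unfold: (prev rank * current mode) x rest
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))  # keep singular values above a relative cutoff
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        C = s[:rank, None] * Vt[:rank]              # carry the remainder to the next step
        r = rank
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full array."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=(T.ndim - 1, 0))
    return T.reshape(T.shape[1:-1])  # drop the boundary rank-1 dimensions

# Toy example: a 4-way tensor built from random cores with TT ranks (1, 2, 3, 2, 1)
rng = np.random.default_rng(0)
Gs = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 5, 3), (3, 4, 2), (2, 3, 1)]]
T = tt_to_full(Gs)
cores = tt_svd(T)
rel_err = np.linalg.norm(T - tt_to_full(cores)) / np.linalg.norm(T)
```

Because each core stores only rank-times-mode-times-rank entries, algorithms that keep all iterates in this format (e.g. compressed value iteration) avoid ever materializing the exponentially large full array.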
Ceruti, Gianluca [Verfasser]. "Unconventional contributions to dynamical low-rank approximation of tree tensor networks / Gianluca Ceruti". Tübingen : Universitätsbibliothek Tübingen, 2021. http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1186805.
Gorodetsky, Alex Arkady. "Continuous low-rank tensor decompositions, with applications to stochastic optimal control and data assimilation". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108918.
Pełny tekst źródłaCataloged from PDF version of thesis.
Includes bibliographical references (pages 205-214).
Optimal decision making under uncertainty is critical for control and optimization of complex systems. However, many techniques for solving problems such as stochastic optimal control and data assimilation encounter the curse of dimensionality when too many state variables are involved. In this thesis, we propose a framework for computing with high-dimensional functions that mitigates this exponential growth in complexity for problems with separable structure. Our framework tightly integrates two emerging areas: tensor decompositions and continuous computation. Tensor decompositions are able to effectively compress and operate with low-rank multidimensional arrays. Continuous computation is a paradigm for computing with functions instead of arrays, and it is best realized by Chebfun, a MATLAB package for computing with functions of up to three dimensions. Continuous computation provides a natural framework for building numerical algorithms that effectively, naturally, and automatically adapt to problem structure. The first part of this thesis describes a compressed continuous computation framework centered around a continuous analogue to the (discrete) tensor-train decomposition called the function-train decomposition. Computation with the function-train requires continuous matrix factorizations and continuous numerical linear algebra. Continuous analogues are presented for performing cross approximation; rounding; multilinear algebra operations such as addition, multiplication, integration, and differentiation; and continuous, rank-revealing, alternating least squares. Advantages of the function-train over the tensor-train include the ability to adaptively approximate functions and the ability to compute with functions that are parameterized differently. For example, while elementwise multiplication between tensors of different sizes is undefined, functions in FT format can be readily multiplied together. 
Next, we develop compressed versions of value iteration, policy iteration, and multilevel algorithms for solving dynamic programming problems arising in stochastic optimal control. These techniques enable computing global solutions to a broader set of problems, for example those with non-affine control inputs, than previously possible. Examples are presented for motion planning with robotic systems that have up to seven states. Finally, we use the FT to extend integration-based Gaussian filtering to larger state spaces than previously considered. Examples are presented for dynamical systems with up to twenty states.
by Alex Arkady Gorodetsky.
Ph. D.
Benedikt, Udo [Verfasser], Alexander A. [Akademischer Betreuer] Auer, and Sibylle [Gutachter] Gemming. "Low-Rank Tensor Approximation in post Hartree-Fock Methods / Udo Benedikt ; Gutachter: Sibylle Gemming ; Betreuer: Alexander A. Auer". Chemnitz : Universitätsbibliothek Chemnitz, 2014. http://d-nb.info/1230577440/34.
Cordolino Sobral, Andrews. "Robust low-rank and sparse decomposition for moving object detection : from matrices to tensors". Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS007/document.
Pełny tekst źródłaThis thesis introduces the recent advances on decomposition into low-rank plus sparse matrices and tensors, as well as the main contributions to face the principal issues in moving object detection. First, we present an overview of the state-of-the-art methods for low-rank and sparse decomposition, as well as their application to background modeling and foreground segmentation tasks. Next, we address the problem of background model initialization as a reconstruction process from missing/corrupted data. A novel methodology is presented showing an attractive potential for background modeling initialization in video surveillance. Subsequently, we propose a double-constrained version of robust principal component analysis to improve the foreground detection in maritime environments for automated video-surveillance applications. The algorithm makes use of double constraints extracted from spatial saliency maps to enhance object foreground detection in dynamic scenes. We also developed two incremental tensor-based algorithms in order to perform background/foreground separation from multidimensional streaming data. These works address the problem of low-rank and sparse decomposition on tensors. Finally, we present a particular work realized in conjunction with the Computer Vision Center (CVC) at Autonomous University of Barcelona (UAB)
Books on the topic "Low-Rank Tensor"
Ashraphijuo, Morteza. Low-Rank Tensor Completion - Fundamental Limits and Efficient Algorithms. [New York, N.Y.?]: [publisher not identified], 2020.
Lee, Namgil, Anh-Huy Phan, Danilo P. Mandic, Andrzej Cichocki, and Ivan Oseledets. Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 1 Low-Rank Tensor Decompositions. Now Publishers, 2016.
Book chapters on the topic "Low-Rank Tensor"
Liu, Yipeng, Jiani Liu, Zhen Long, and Ce Zhu. "Low-Rank Tensor Recovery". In Tensor Computation for Data Analysis, 93–114. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74386-4_4.
Zhong, Guoqiang, and Mohamed Cheriet. "Low Rank Tensor Manifold Learning". In Low-Rank and Sparse Modeling for Visual Analysis, 133–50. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12000-3_7.
Song, Zhao, David P. Woodruff, and Peilin Zhong. "Relative Error Tensor Low Rank Approximation". In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, 2772–89. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2019. http://dx.doi.org/10.1137/1.9781611975482.172.
Chen, Wanli, Xinge Zhu, Ruoqi Sun, Junjun He, Ruiyu Li, Xiaoyong Shen, and Bei Yu. "Tensor Low-Rank Reconstruction for Semantic Segmentation". In Computer Vision – ECCV 2020, 52–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58520-4_4.
Harmouch, Jouhayna, Bernard Mourrain, and Houssam Khalil. "Decomposition of Low Rank Multi-symmetric Tensor". In Mathematical Aspects of Computer and Information Sciences, 51–66. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-72453-9_4.
Grasedyck, Lars, and Christian Löbbert. "Parallel Algorithms for Low Rank Tensor Arithmetic". In Advances in Mechanics and Mathematics, 271–82. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-02487-1_16.
Nouy, Anthony. "Low-Rank Tensor Methods for Model Order Reduction". In Handbook of Uncertainty Quantification, 857–82. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-12385-1_21.
Kressner, Daniel, and Francisco Macedo. "Low-Rank Tensor Methods for Communicating Markov Processes". In Quantitative Evaluation of Systems, 25–40. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10696-0_4.
Nouy, Anthony. "Low-Rank Tensor Methods for Model Order Reduction". In Handbook of Uncertainty Quantification, 1–26. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-11259-6_21-1.
Purohit, Antra, Abhishek, Rakesh, and Shekhar Verma. "Optimal Low Rank Tensor Factorization for Deep Learning". In Communications in Computer and Information Science, 476–84. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2372-0_42.
Pełny tekst źródłaStreszczenia konferencji na temat "Low-Rank Tensor"
Javed, Sajid, Jorge Dias, and Naoufel Werghi. "Low-Rank Tensor Tracking". In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00074.
Phan, Anh-Huy, Petr Tichavsky, and Andrzej Cichocki. "Low rank tensor deconvolution". In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178355.
Shi, Yuqing, Shiqiang Du, and Weilan Wang. "Robust Low-Rank and Sparse Tensor Decomposition for Low-Rank Tensor Completion". In 2021 33rd Chinese Control and Decision Conference (CCDC). IEEE, 2021. http://dx.doi.org/10.1109/ccdc52312.2021.9601608.
Wang, Zhanliang, Junyu Dong, Xinguo Liu, and Xueying Zeng. "Low-Rank Tensor Completion by Approximating the Tensor Average Rank". In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00457.
Bazerque, Juan Andres, Gonzalo Mateos, and Georgios B. Giannakis. "Nonparametric low-rank tensor imputation". In 2012 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2012. http://dx.doi.org/10.1109/ssp.2012.6319847.
Ribeiro, Lucas N., Andre L. F. de Almeida, and Joao C. M. Mota. "Low-Rank Tensor MMSE Equalization". In 2019 16th International Symposium on Wireless Communication Systems (ISWCS). IEEE, 2019. http://dx.doi.org/10.1109/iswcs.2019.8877123.
Liu, Han, Jing Liu, and Liyu Su. "Adaptive Rank Estimation Based Tensor Factorization Algorithm for Low-Rank Tensor Completion". In 2019 Chinese Control Conference (CCC). IEEE, 2019. http://dx.doi.org/10.23919/chicc.2019.8865482.
Haselby, Cullen, Santhosh Karnik, and Mark Iwen. "Tensor Sandwich: Tensor Completion for Low CP-Rank Tensors via Adaptive Random Sampling". In 2023 International Conference on Sampling Theory and Applications (SampTA). IEEE, 2023. http://dx.doi.org/10.1109/sampta59647.2023.10301204.
Wang, Wenqi, Vaneet Aggarwal, and Shuchin Aeron. "Efficient Low Rank Tensor Ring Completion". In 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017. http://dx.doi.org/10.1109/iccv.2017.607.
Li, Ping, Jiashi Feng, Xiaojie Jin, Luming Zhang, Xianghua Xu, and Shuicheng Yan. "Online Robust Low-Rank Tensor Learning". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/303.