To view other types of publications on this topic, follow the link: Decomposition de tenseur.

Journal articles on the topic "Decomposition de tenseur"


Consult the top 50 journal articles for your research on the topic "Decomposition de tenseur".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will automatically generate the bibliographic reference to the chosen source in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its online abstract whenever such details are provided in the source's metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Zheng, Yu-Bang, Ting-Zhu Huang, Xi-Le Zhao, Qibin Zhao, and Tai-Xiang Jiang. "Fully-Connected Tensor Network Decomposition and Its Application to Higher-Order Tensor Completion." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11071–78. http://dx.doi.org/10.1609/aaai.v35i12.17321.

Abstract:
The popular tensor train (TT) and tensor ring (TR) decompositions have achieved promising results in science and engineering. However, TT and TR decompositions only establish an operation between two adjacent factors and are highly sensitive to the permutation of tensor modes, leading to an inadequate and inflexible representation. In this paper, we propose a generalized tensor decomposition, which decomposes an Nth-order tensor into a set of Nth-order factors and establishes an operation between any two factors. Since it can be graphically interpreted as a fully-connected network, we name it the fully-connected tensor network (FCTN) decomposition. The superiorities of the FCTN decomposition lie in its outstanding capability for adequately characterizing the intrinsic correlations between any two modes of tensors and its essential invariance under transposition. Furthermore, we apply the FCTN decomposition to one representative task, i.e., tensor completion, and develop an efficient solving algorithm based on proximal alternating minimization. Theoretically, we prove the convergence of the developed algorithm, i.e., the sequence obtained by it globally converges to a critical point. Experimental results substantiate that the proposed method compares favorably to the state-of-the-art methods based on other tensor decompositions.
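As background for this entry: the tensor-train decomposition that FCTN generalizes can be computed by a sequence of SVDs over reshaped unfoldings (TT-SVD). A minimal NumPy sketch of the TT format only, not the authors' FCTN method; function names are illustrative:

```python
import numpy as np

def tt_decompose(T, eps=1e-12):
    """Tensor-train decomposition via sequential SVDs (TT-SVD).
    Singular values below eps relative to the largest are truncated."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        M = M.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = s[:r, None] * Vt[:r]          # carry the remainder forward
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the chain of 3-way cores back into a full array."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=1)
    return out.reshape(out.shape[1:-1])
```

Note how each core only interacts with its left and right neighbours in the contraction chain; this is exactly the "operation between two adjacent factors" limitation the abstract describes.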
2

Luo, Dijun, Chris Ding, and Heng Huang. "Multi-Level Cluster Indicator Decompositions of Matrices and Tensors." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 423–28. http://dx.doi.org/10.1609/aaai.v25i1.7933.

Abstract:
A main challenging problem for many machine learning and data mining applications is that the amount of data and features are very large, so that low-rank approximations of original data are often required for efficient computation. We propose new multi-level clustering based low-rank matrix approximations which are comparable and even more compact than Singular Value Decomposition (SVD). We utilize the cluster indicators of data clustering results to form the subspaces, hence our decomposition results are more interpretable. We further generalize our clustering based matrix decompositions to tensor decompositions that are useful in high-order data analysis. We also provide an upper bound for the approximation error of our tensor decomposition algorithm. In all experimental results, our methods significantly outperform traditional decomposition methods such as SVD and high-order SVD.
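The SVD baseline that the clustering-based decompositions above are compared against is the rank-k truncated SVD, which is optimal in the least-squares sense (Eckart-Young). A small NumPy sketch:

```python
import numpy as np

def svd_lowrank(A, k):
    """Best rank-k approximation of A in the Frobenius norm (Eckart-Young),
    obtained by truncating the singular value decomposition."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]
```

Keeping more singular triplets can only reduce the residual, which is why any competing decomposition is judged by error at a comparable storage budget.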
3

Breiding, Paul, and Nick Vannieuwenhoven. "On the average condition number of tensor rank decompositions." IMA Journal of Numerical Analysis 40, no. 3 (June 20, 2019): 1908–36. http://dx.doi.org/10.1093/imanum/drz026.

Abstract:
We compute the expected value of powers of the geometric condition number of random tensor rank decompositions. It is shown in particular that the expected value of the condition number of $n_1\times n_2 \times 2$ tensors with a random rank-$r$ decomposition, given by factor matrices with independent and identically distributed standard normal entries, is infinite. This entails that it is expected and probable that such a rank-$r$ decomposition is sensitive to perturbations of the tensor. Moreover, it provides concrete further evidence that tensor decomposition can be a challenging problem, also from the numerical point of view. On the other hand, we provide strong theoretical and empirical evidence that tensors of size $n_1 \times n_2 \times n_3$ with all $n_1,n_2,n_3 \geqslant 3$ have a finite average condition number. This suggests that there exists a gap in the expected sensitivity of tensors between those of format $n_1\times n_2 \times 2$ and other order-3 tensors. To establish these results we show that a natural weighted distance from a tensor rank decomposition to the locus of ill-posed decompositions with an infinite geometric condition number is bounded from below by the inverse of this condition number. That is, we prove one inequality towards a so-called condition number theorem for the tensor rank decomposition.
4

Alhamadi, Khaled M., Fahmi Yaseen Qasem, and Meqdad Ahmed Ali. "Different types of decomposition for certain tensors in \(K^h-BR-F_n\) and \(K^h-BR\)-affinely connected space." University of Aden Journal of Natural and Applied Sciences 20, no. 2 (August 31, 2016): 355–63. http://dx.doi.org/10.47372/uajnas.2016.n2.a10.

Abstract:
In this paper we define the \(K^h\)-birecurrent space, which is characterized by the condition \(K^i_{jkh|m|l} = a_{lm} K^i_{jkh}\), \(K^i_{jkh} \neq 0\), and we introduce some decompositions of Cartan's fourth and third curvature tensors and of the Berwald curvature tensor and its torsion tensor. The paper is devoted to the discussion of decompositions of different tensors in \(K^h\)-birecurrent spaces and \(K^h\)-birecurrent affinely connected spaces: the decomposition of Cartan's fourth and third curvature tensors in a \(K^h\)-birecurrent space, and the decomposition of the Berwald curvature tensor in a \(K^h\)-birecurrent affinely connected space. Various results, formulas, theorems and identities are obtained.
5

Hameduddin, Ismail, Charles Meneveau, Tamer A. Zaki, and Dennice F. Gayme. "Geometric decomposition of the conformation tensor in viscoelastic turbulence." Journal of Fluid Mechanics 842 (March 12, 2018): 395–427. http://dx.doi.org/10.1017/jfm.2018.118.

Abstract:
This work introduces a mathematical approach to analysing the polymer dynamics in turbulent viscoelastic flows that uses a new geometric decomposition of the conformation tensor, along with associated scalar measures of the polymer fluctuations. The approach circumvents an inherent difficulty in traditional Reynolds decompositions of the conformation tensor: the fluctuating tensor fields are not positive definite and so do not retain the physical meaning of the tensor. The geometric decomposition of the conformation tensor yields both mean and fluctuating tensor fields that are positive definite. The fluctuating tensor in the present decomposition has a clear physical interpretation as a polymer deformation relative to the mean configuration. Scalar measures of this fluctuating conformation tensor are developed based on the non-Euclidean geometry of the set of positive definite tensors. Drag-reduced viscoelastic turbulent channel flow is then used as an example case study. The conformation tensor field, obtained using direct numerical simulations, is analysed using the proposed framework.
6

Obster, Dennis, and Naoki Sasakura. "Counting Tensor Rank Decompositions." Universe 7, no. 8 (August 15, 2021): 302. http://dx.doi.org/10.3390/universe7080302.

Abstract:
Tensor rank decomposition is a useful tool for geometric interpretation of the tensors in the canonical tensor model (CTM) of quantum gravity. In order to understand the stability of this interpretation, it is important to be able to estimate how many tensor rank decompositions can approximate a given tensor. More precisely, finding an approximate symmetric tensor rank decomposition of a symmetric tensor Q with an error allowance Δ is to find vectors \(\phi_i\) satisfying \(\|Q - \sum_{i=1}^{R} \phi_i \otimes \phi_i \otimes \cdots \otimes \phi_i\|^2 \leq \Delta\). The volume of all such possible \(\phi_i\) is an interesting quantity which measures the amount of possible decompositions for a tensor Q within an allowance. While it would be difficult to evaluate this quantity for each Q, we find an explicit formula for a similar quantity by integrating over all Q of unit norm. The expression as a function of Δ is given by the product of a hypergeometric function and a power function. By combining new numerical analysis and previous results, we conjecture a formula for the critical rank, yielding an estimate for the spacetime degrees of freedom of the CTM. We also extend the formula to generic decompositions of non-symmetric tensors in order to make our results more broadly applicable. Interestingly, the derivation depends on the existence (convergence) of the partition function of a matrix model which previously appeared in the context of the CTM.
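The error-allowance criterion in this abstract is straightforward to evaluate numerically for third-order symmetric tensors. A small NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def symmetric_rank_sum(phis):
    """Build sum_i phi_i ⊗ phi_i ⊗ phi_i, a symmetric rank-R candidate."""
    return sum(np.einsum('a,b,c->abc', p, p, p) for p in phis)

def within_allowance(Q, phis, delta):
    """Check the criterion ||Q - sum_i phi_i^{⊗3}||^2 <= delta."""
    resid = Q - symmetric_rank_sum(phis)
    return float(np.sum(resid ** 2)) <= delta
```

The set of all vector tuples `phis` passing this check is exactly the region whose volume the paper studies as a function of Δ.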
7

Adkins, William A. "Primary decomposition of torsionR[X]-modules." International Journal of Mathematics and Mathematical Sciences 17, no. 1 (1994): 41–46. http://dx.doi.org/10.1155/s0161171294000074.

Abstract:
This paper is concerned with studying hereditary properties of primary decompositions of torsion R[X]-modules M which are torsion free as R-modules. Specifically, if an R[X]-submodule of M is pure as an R-submodule, then the primary decomposition of M determines a primary decomposition of the submodule. This is a generalization of the classical fact from linear algebra that a diagonalizable linear transformation on a vector space restricts to a diagonalizable linear transformation of any invariant subspace. Additionally, primary decompositions are considered under direct sums and tensor products.
8

Cai, Yunfeng, and Ping Li. "A Blind Block Term Decomposition of High Order Tensors." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6868–76. http://dx.doi.org/10.1609/aaai.v35i8.16847.

Abstract:
Tensor decompositions have found many applications in signal processing, data mining, machine learning, etc. In particular, the block term decomposition (BTD), which is a generalization of CP decomposition and Tucker decomposition/HOSVD, has been successfully used for the compression and acceleration of neural networks. However, computing BTD is NP-hard, and optimization based methods usually suffer from slow convergence or even fail to converge, which limits the applications of BTD. This paper considers a “blind” block term decomposition (BBTD) of high order tensors, in which the block diagonal structure of the core tensor is unknown. Our contributions include: 1) We establish the necessary and sufficient conditions for the existence of BTD, characterize the condition when a BTD solves the BBTD problem, and show that the BBTD is unique under a “low rank” assumption. 2) We propose an algebraic method to compute the BBTD. This method transforms the problem of determining the block diagonal structure of the core tensor into a clustering problem of complex numbers, in polynomial time. And once the clustering problem is solved, the BBTD can be obtained via computing several matrix decompositions. Numerical results show that our method is able to compute the BBTD, even in the presence of noise to some extent, whereas optimization based methods (e.g., MINF and NLS in TENSORLAB) may fail to converge.
9

Hauenstein, Jonathan D., Luke Oeding, Giorgio Ottaviani, and Andrew J. Sommese. "Homotopy techniques for tensor decomposition and perfect identifiability." Journal für die reine und angewandte Mathematik (Crelles Journal) 2019, no. 753 (August 1, 2019): 1–22. http://dx.doi.org/10.1515/crelle-2016-0067.

Abstract:
Let T be a general complex tensor of format \((n_1,\dots,n_d)\). When the fraction \(\prod_i n_i / [1+\sum_i (n_i-1)]\) is an integer, and a natural inequality (called balancedness) is satisfied, it is expected that T has finitely many minimal decompositions as a sum of decomposable tensors. We show how homotopy techniques allow us to find all the decompositions of T, starting from a given one. Computationally, this gives a guess regarding the total number of such decompositions. This guess matches exactly with all cases previously known, and predicts several unknown cases. Some surprising experiments yielded two new cases of generic identifiability: formats \((3,4,5)\) and \((2,2,2,3)\), which have a unique decomposition as the sum of six and four decomposable tensors, respectively. We conjecture that these two cases together with the classically known matrix pencils are the only cases where generic identifiability holds, i.e., the only identifiable cases. Building on the computational experiments, we use algebraic geometry to prove these two new cases are indeed generically identifiable.
10

Zhu, Ben-Chao, and Xiang-Song Chen. "Tensor gauge condition and tensor field decomposition." Modern Physics Letters A 30, no. 35 (October 28, 2015): 1550192. http://dx.doi.org/10.1142/s0217732315501928.

Abstract:
We discuss various proposals of separating a tensor field into pure-gauge and gauge-invariant components. Such tensor field decomposition is intimately related to the effort of identifying the real gravitational degrees of freedom out of the metric tensor in Einstein’s general relativity. We show that as for a vector field, the tensor field decomposition has exact correspondence to and can be derived from the gauge-fixing approach. The complication for the tensor field, however, is that there are infinitely many complete gauge conditions in contrast to the uniqueness of Coulomb gauge for a vector field. The cause of such complication, as we reveal, is the emergence of a peculiar gauge-invariant pure-gauge construction for any gauge field of spin [Formula: see text]. We make an extensive exploration of the complete tensor gauge conditions and their corresponding tensor field decompositions, regarding mathematical structures, equations of motion for the fields and nonlinear properties. Apparently, no single choice is superior in all aspects, due to an awkward fact that no gauge-fixing can reduce a tensor field to be purely dynamical (i.e. transverse and traceless), as can the Coulomb gauge in a vector case.
11

Sutcliffe, S. "Spectral Decomposition of the Elasticity Tensor." Journal of Applied Mechanics 59, no. 4 (December 1, 1992): 762–73. http://dx.doi.org/10.1115/1.2894040.

Abstract:
The elasticity tensor in anisotropic elasticity can be regarded as a symmetric linear transformation on the nine-dimensional space of second-order tensors. This allows the elasticity tensor to be expressed in terms of its spectral decomposition. The structures of the spectral decompositions are determined by the sets of invariant subspaces that are consistent with material symmetry. Eigenvalues always depend on the values of the elastic constants, but the eigenvectors are, in part, independent of these values. The structures of the spectral decompositions are presented for the classical symmetry groups of crystallography, and numerical results are presented for representative materials in each group. Spectral forms for the equilibrium equations, the acoustic tensor, and the stored energy function are also derived.
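To illustrate the idea of this entry in the simplest symmetry class: for an isotropic material the spectral decomposition is known in closed form. In the orthonormal (Mandel) 6x6 matrix representation, the stiffness has one dilatational eigenvalue 3λ+2μ and a five-fold deviatoric eigenvalue 2μ. A NumPy check with arbitrarily chosen Lamé constants:

```python
import numpy as np

lam, mu = 2.0, 1.0  # illustrative Lamé constants (hypothetical material)

# Isotropic stiffness tensor written as a symmetric 6x6 matrix in the
# orthonormal Mandel basis of second-order symmetric tensors
C = np.zeros((6, 6))
C[:3, :3] = lam
C[np.arange(3), np.arange(3)] = lam + 2 * mu
C[np.arange(3, 6), np.arange(3, 6)] = 2 * mu

# Spectral decomposition: one dilatational eigenvalue 3*lam + 2*mu and a
# five-fold deviatoric/shear eigenvalue 2*mu
evals = np.sort(np.linalg.eigvalsh(C))
```

For lower symmetry classes the eigenvectors are no longer fixed by symmetry alone, which is the structure the paper catalogues group by group.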
12

Ghosh, Debraj, and Anup Suryawanshi. "Approximation of Spatio-Temporal Random Processes Using Tensor Decomposition." Communications in Computational Physics 16, no. 1 (July 2014): 75–95. http://dx.doi.org/10.4208/cicp.201112.191113a.

Abstract:
A new representation of spatio-temporal random processes is proposed in this work. In practical applications, such processes are used to model velocity fields, temperature distributions, and the response of vibrating systems, to name a few. Finding an efficient representation for any random process leads to encapsulation of information which makes it more convenient for practical implementation, for instance, in a computational mechanics problem. For a single-parameter process such as a spatial or temporal process, the eigenvalue decomposition of the covariance matrix leads to the well-known Karhunen-Loève (KL) decomposition. However, for multiparameter processes such as a spatio-temporal process, the covariance function itself can be defined in multiple ways. Here the process is assumed to be measured at a finite set of spatial locations and a finite number of time instants. Then the spatial covariance matrices at different time instants are considered to define the covariance of the process. This set of square, symmetric, positive semi-definite matrices is then represented as a third-order tensor. A suitable decomposition of this tensor can identify the dominant components of the process, and these components are then used to define a closed-form representation of the process. The procedure is analogous to the KL decomposition for a single-parameter process; however, the decompositions and interpretations vary significantly. The tensor decompositions are successfully applied to (i) a heat conduction problem, (ii) a vibration problem, and (iii) a covariance function taken from the literature that was fitted to model measured wind velocity data. It is observed that the proposed representation provides an efficient approximation to some processes. Furthermore, a comparison with the KL decomposition showed that the proposed method is computationally cheaper than the KL, both in terms of computer memory and execution time.
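For reference, the single-parameter KL decomposition that this paper generalizes amounts, in the discrete setting, to an eigendecomposition of the covariance matrix. A small NumPy sketch on synthetic data (the data and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean samples of a "spatial" process at 10 locations
samples = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 10))
samples -= samples.mean(axis=0)

# Discrete KL decomposition: eigendecompose the sample covariance matrix
cov = samples.T @ samples / (len(samples) - 1)
evals, evecs = np.linalg.eigh(cov)            # ascending order
order = np.argsort(evals)[::-1]               # sort modes by energy
evals, evecs = evals[order], evecs[:, order]

# KL coefficients are mutually uncorrelated; truncating to the leading
# eigenpairs gives the reduced-order representation of the process
coeffs = samples @ evecs
```

The paper's point is that for a spatio-temporal process there is no single covariance matrix to eigendecompose, hence the move to a third-order tensor of spatial covariance matrices.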
13

Sucharitha, B., and Dr K. Anitha Sheela. "Compression of Hyper Spectral Images using Tensor Decomposition Methods." International Journal of Circuits, Systems and Signal Processing 16 (October 7, 2022): 1148–55. http://dx.doi.org/10.46300/9106.2022.16.138.

Abstract:
Tensor decomposition methods have recently been identified as an effective approach for compressing high-dimensional data. Tensors have a wide range of applications in numerical linear algebra, chemometrics, data mining, signal processing, statistics, and machine learning. Due to the huge amount of information that hyperspectral images carry, they require more memory to store, process and send. We need to compress hyperspectral images in order to reduce storage and processing costs, and tensor decomposition techniques can be used for this. The primary objective of this work is to utilize tensor decomposition methods to compress hyperspectral images. This paper explores three types of tensor decompositions: Tucker Decomposition (TD_ALS), CANDECOMP/PARAFAC (CP) and Tucker_HOSVD (Higher-Order Singular Value Decomposition), and compares these methods on two real hyperspectral images: the Salinas image (512 x 217 x 224) and Indian Pines corrected (145 x 145 x 200). The PSNR and SSIM are used to evaluate how well these techniques work. When compared to the iterative approximation methods employed in the CP and Tucker_ALS methods, the Tucker_HOSVD method decomposes the hyperspectral image into core and component matrices more quickly. According to experimental analysis, Tucker_HOSVD's reconstruction of the image preserves image quality while having a higher compression ratio than the other two techniques.
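A minimal truncated HOSVD, the third method compared in this entry, can be written in a few lines of NumPy (a generic sketch of the standard algorithm, not the authors' implementation):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: per-mode factor matrices, then the core."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    core = T
    for n, Un in enumerate(U):
        core = np.moveaxis(np.tensordot(Un.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, U

def hosvd_reconstruct(core, U):
    """Multiply the core by every factor matrix to rebuild the full tensor."""
    out = core
    for n, Un in enumerate(U):
        out = np.moveaxis(np.tensordot(Un, np.moveaxis(out, n, 0), axes=1), 0, n)
    return out
```

Storing only the small core and the thin factor matrices is what yields the compression; unlike ALS-based Tucker or CP, no iteration is needed, which matches the speed advantage reported in the abstract.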
14

CHODA, MARIE. "VON NEUMANN ENTROPY AND RELATIVE POSITION BETWEEN SUBALGEBRAS." International Journal of Mathematics 24, no. 08 (July 2013): 1350066. http://dx.doi.org/10.1142/s0129167x13500663.

Abstract:
In order to give numerical characterizations of the notion of "mutual orthogonality", we introduce two kinds of families of positive definite matrices for a unitary u in a finite von Neumann algebra M. They arise naturally from u, depending on the decompositions of M: one corresponds to the tensor product decomposition and the other to the crossed product decomposition. By using the von Neumann entropy for these positive definite matrices, we characterize the notion of mutual orthogonality between subalgebras.
15

Sobolev, Konstantin, Dmitry Ermilov, Anh-Huy Phan, and Andrzej Cichocki. "PARS: Proxy-Based Automatic Rank Selection for Neural Network Compression via Low-Rank Weight Approximation." Mathematics 10, no. 20 (October 14, 2022): 3801. http://dx.doi.org/10.3390/math10203801.

Abstract:
Low-rank matrix/tensor decompositions are promising methods for reducing the inference time, computation, and memory consumption of deep neural networks (DNNs). This group of methods decomposes the pre-trained neural network weights through low-rank matrix/tensor decomposition and replaces the original layers with lightweight factorized layers. A main drawback of the technique is that it demands a great amount of time and effort to select the best ranks of tensor decomposition for each layer in a DNN. This paper proposes a Proxy-based Automatic tensor Rank Selection method (PARS) that utilizes a Bayesian optimization approach to find the best combination of ranks for neural network (NN) compression. We observe that the decomposition of weight tensors adversely influences the feature distribution inside the neural network and impairs the predictability of the post-compression DNN performance. Based on this finding, a novel proxy metric is proposed to deal with the abovementioned issue and to increase the quality of the rank search procedure. Experimental results show that PARS improves the results of existing decomposition methods on several representative NNs, including ResNet-18, ResNet-56, VGG-16, and AlexNet. We obtain a 3× FLOP reduction with almost no loss of accuracy for ILSVRC-2012 ResNet-18 and a 5.5× FLOP reduction with an accuracy improvement for ILSVRC-2012 VGG-16.
16

Nivetha, S., and V. Maheswari. "Isolate Domination Decomposition of Tensor Product of Cycle Related Graphs." Indian Journal Of Science And Technology 17, no. 30 (August 2, 2024): 3145–54. http://dx.doi.org/10.17485/ijst/v17i30.1796.

Abstract:
Objectives: This study deals with the tensor product of some cycle related graphs which admit an IDD. Methods: We have decomposed a graph into parts in such a way that the isolate domination number of the partitions ranges from to . We have used basic terms and propositions of isolate domination over the graph in order to obtain the results. Findings: We have introduced the Isolate Domination Decomposition (IDD) of Graphs 1, defined as a collection of subgraphs of such that every edge of belongs to exactly one , each is connected and contains at least one edge, and . We have also found the range of vertices for a graph under which the conditions of IDD are satisfied, along with the converse part. Novelty: Domination and decomposition are widely used in networking, block design, coding theory and many other fields. Motivated by the concept of ascending pendant domination and decomposition 2, 3, we have used here isolate domination combined with decomposition to characterize the graphs which admit this new parameter and to investigate their vertex bounds. Keywords: Dominating Set, Domination Number, Isolate Dominating Set, Decomposition, Isolate Domination Decomposition, Tensor Product, Cycle Related Graph
17

Paúl, Pedro J., Carmen Sáez, and Juan M. Virués. "Locally Convex Spaces with Toeplitz Decompositions." Journal of the Australian Mathematical Society. Series A. Pure Mathematics and Statistics 68, no. 1 (February 2000): 19–40. http://dx.doi.org/10.1017/s1446788700001555.

Abstract:
A Toeplitz decomposition of a locally convex space E into subspaces (Ek) with continuous projections (Pk) is a decomposition of every x ∈ E as x = ΣkPkx where ordinary summability has been replaced by summability with respect to an infinite and row-finite matrix. We extend to the setting of Toeplitz decompositions a number of results about the locally convex structure of a space with a Schauder decomposition. Namely, we give some necessary or sufficient conditions for being reflexive, a Montel space or a Schwartz space. Roughly speaking, each of these locally convex properties is linked to a property of the convergence of the decomposition. We apply these results to study some structural questions in projective tensor products and spaces with Cesàro bases.
18

Britten, D. J., J. Hooper, and F. W. Lemire. "Simple Cn modules with multiplicities 1 and applications." Canadian Journal of Physics 72, no. 7-8 (July 1, 1994): 326–35. http://dx.doi.org/10.1139/p94-048.

Abstract:
In this paper we show that there exist exactly two nonequivalent simple infinite dimensional highest weight Cn modules having the property that every weight space is one dimensional. The tensor products of these modules with any finite-dimensional simple Cn module are proven to be completely reducible and we provide an explicit decomposition for such tensor products. As an application of these decompositions, we obtain two recursion formulas for computing the multiplicities of simple finite dimensional Cn modules. These formulas involve a sum over subgroups of index 2 in the Weyl group of Cn.
19

Kountchev, Roumen, Rumen Mironov, and Roumiana Kountcheva. "Complexity Estimation of Cubical Tensor Represented through 3D Frequency-Ordered Hierarchical KLT." Symmetry 12, no. 10 (September 26, 2020): 1605. http://dx.doi.org/10.3390/sym12101605.

Abstract:
This work introduces a new hierarchical decomposition for cubical tensors of size 2^n, based on the well-known orthogonal transforms Principal Component Analysis and the Karhunen–Loeve Transform. The decomposition is called 3D Frequency-Ordered Hierarchical KLT (3D-FOHKLT). It is separable, and its calculation is based on the one-dimensional Frequency-Ordered Hierarchical KLT (1D-FOHKLT) applied on a sequence of matrices. The transform matrix is the product of n sparse matrices, symmetric about their main diagonal. In particular, for the case in which the angles which define the transform coefficients for the couples of matrices in each hierarchical level of 1D-FOHKLT are equal to π/4, the transform coincides with that of the frequency-ordered 1D Walsh–Hadamard. Compared to the hierarchical decompositions of Tucker (H-Tucker) and the Tensor-Train (TT), the offered approach does not ensure full decorrelation between its components, but is close to the maximum. On the other hand, the evaluation of the computational complexity (CC) of the new decomposition proves that it is lower than that of the above-mentioned similar approaches. In correspondence with the comparison results for H-Tucker and TT, the CC decreases quickly as the number of hierarchical levels n increases. An additional advantage of 3D-FOHKLT is that it is based on the use of operations of low complexity, while the similar famous decompositions need large numbers of iterations to achieve the coveted accuracy.
20

Goodbrake, Christian, Alain Goriely, and Arash Yavari. "The mathematical foundations of anelasticity: existence of smooth global intermediate configurations." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 477, no. 2245 (January 2021): 20200462. http://dx.doi.org/10.1098/rspa.2020.0462.

Abstract:
A central tool of nonlinear anelasticity is the multiplicative decomposition of the deformation tensor that assumes that the deformation gradient can be decomposed as a product of an elastic and an anelastic tensor. It is usually justified by the existence of an intermediate configuration. Yet, this configuration cannot exist in Euclidean space, in general, and the mathematical basis for this assumption is on unsatisfactory ground. Here, we derive a sufficient condition for the existence of global intermediate configurations, starting from a multiplicative decomposition of the deformation gradient. We show that these global configurations are unique up to isometry. We examine the result of isometrically embedding these configurations in higher-dimensional Euclidean space, and construct multiplicative decompositions of the deformation gradient reflecting these embeddings. As an example, for a family of radially symmetric deformations, we construct isometric embeddings of the resulting intermediate configurations, and compute the residual stress fields explicitly.
21

Yu, Seungmin, Hayun Lee, and Dongkun Shin. "Optimizing Computation of Tensor-Train Decomposed Embedding Layer." Journal of KIISE 50, no. 9 (September 30, 2023): 729–36. http://dx.doi.org/10.5626/jok.2023.50.9.729.

22

Caban, Paweł, Krzysztof Podlaski, Jakub Rembieliński, Kordian A. Smoliński, and Zbigniew Walczak. "Tensor Product Decomposition, Entanglement, and Bogoliubov Transformations for Two Fermion System." Open Systems & Information Dynamics 12, no. 02 (June 2005): 179–88. http://dx.doi.org/10.1007/s11080-005-5729-8.

Abstract:
We consider the two-fermion system whose states are subjected to the superselection rule forbidding the superposition of states with fermionic and bosonic statistics. This implies that separable states are described only by diagonal density matrices. Moreover, we find the explicit formula for the entanglement of formation, which in this case cannot be calculated properly using Wootters's concurrence. We also discuss the problem of the choice of tensor product decomposition in a system of two fermions with the help of Bogoliubov transformations of creation and annihilation operators. Finally, we show that there exist states which are separable with respect to all tensor product decompositions of the underlying Hilbert space.
23

Watanabe, N., A. Ishida, J. Murakami, and N. Yamamoto. "Solar Radiation and Weather Analysis of Meteorological Satellite Data by Tensor Decomposition." Journal of Image and Graphics 11, no. 3 (September 2023): 271–81. http://dx.doi.org/10.18178/joig.11.3.271-281.

Abstract:
In this study, the data obtained from meteorological satellites were analyzed using tensor decomposition. The data used in this paper are meteorological image data observed by the Himawari-8 satellite and solar radiation data generated from Himawari Standard Data. First, we applied Higher-Order Singular Value Decomposition (HOSVD), a type of tensor decomposition, to the original image data and analyzed the features of the data, called the core tensor, obtained from the decomposition. As a result, it was found that the maximum value of the core tensor element is related to the cloud cover in the observed area. We then applied Multidimensional Principal Component Analysis (MPCA), an extension of principal component analysis computed using HOSVD, to the solar radiation data and analyzed the Principal Components (PC) obtained from MPCA. We also found that the PC with the highest contribution rate is related to the solar radiation in the entire observation area. The resulting PC score was compared to actual weather data. From the result, it was confirmed that the temporal transition of the amount of solar radiation in this area can be expressed almost correctly by using the PC score.
24

Elisei-Iliescu, Camelia, Laura-Maria Dogariu, Constantin Paleologu, Jacob Benesty, Andrei-Alexandru Enescu, and Silviu Ciochină. "A Recursive Least-Squares Algorithm for the Identification of Trilinear Forms." Algorithms 13, no. 6 (June 1, 2020): 135. http://dx.doi.org/10.3390/a13060135.

Abstract:
High-dimensional system identification problems can be efficiently addressed based on tensor decompositions and modelling. In this paper, we design a recursive least-squares (RLS) algorithm tailored for the identification of trilinear forms, namely RLS-TF. In our framework, the trilinear form is related to the decomposition of a third-order tensor (of rank one). The proposed RLS-TF algorithm acts on the individual components of the global impulse response, thus being efficient in terms of both performance and complexity. Simulation results indicate that the proposed solution outperforms not only the conventional RLS algorithm (which handles only the global impulse response), but also the previously developed trilinear counterparts based on the least-mean-squares algorithm.
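The rank-one structure the algorithm exploits can be illustrated in a few lines (numpy; the component lengths are arbitrary, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
h1, h2, h3 = rng.standard_normal(4), rng.standard_normal(3), rng.standard_normal(2)

# The trilinear form corresponds to a rank-one third-order tensor ...
H = np.einsum('i,j,k->ijk', h1, h2, h3)

# ... whose vectorization is the global impulse response: a Kronecker product
# of the three shorter components, so 4*3*2 = 24 global coefficients are
# parameterized by only 4 + 3 + 2 = 9 individual ones.
g = np.kron(h3, np.kron(h2, h1))
assert np.allclose(H.reshape(-1, order='F'), g)
```

Updating the three short components instead of the long global response is the source of the complexity advantage the abstract describes.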
25

Rawat, K. S., and Sandeep Chauhan. "Study on Einstein-Sasakian Decomposable Recurrent Space of First Order." Journal of the Tensor Society 12, no. 01 (June 30, 2009): 85–92. http://dx.doi.org/10.56424/jts.v12i01.10589.

Abstract:
Takano [2] studied the decomposition of the curvature tensor in a recurrent space. Sinha and Singh [3] studied and defined the decomposition of the recurrent curvature tensor field in a Finsler space. Singh and Negi studied the decomposition of the recurrent curvature tensor field in a Kaehlerian space, as did Negi and Rawat [6]. Rawat and Silswal [11] studied and defined the decomposition of recurrent curvature tensor fields in a Tachibana space. Rawat and Kunwar Singh [12] studied the decomposition of the curvature tensor field in a Kaehlerian recurrent space of first order. Further, Rawat and Chauhan [23] studied the decomposition of the curvature tensor field in an Einstein-Kaehlerian recurrent space of first order. In the present paper, we study the decomposition of the curvature tensor field R^h_{ijk} in terms of two non-zero vectors and a tensor field in an Einstein-Sasakian recurrent space of first order, and several theorems are established and proved.
26

Benesty, Jacob, Constantin Paleologu, Cristian-Lucian Stanciu, Ruxandra-Liana Costea, Laura-Maria Dogariu, and Silviu Ciochină. "Wiener Filter Using the Conjugate Gradient Method and a Third-Order Tensor Decomposition." Applied Sciences 14, no. 6 (March 13, 2024): 2430. http://dx.doi.org/10.3390/app14062430.

Abstract:
In linear system identification problems, the Wiener filter represents a popular tool and stands as an important benchmark. Nevertheless, it faces significant challenges when identifying long-length impulse responses. In order to address the related shortcomings, the solution presented in this paper is based on a third-order tensor decomposition technique, while the resulting sets of Wiener–Hopf equations are solved with the conjugate gradient (CG) method. Due to the decomposition-based approach, the number of coefficients (i.e., the parameter space of the filter) is greatly reduced, which results in operating with smaller data structures within the algorithm. As a result, improved robustness and accuracy can be achieved, especially in harsh scenarios (e.g., limited/incomplete sets of data and/or noisy conditions). Besides, the CG-based solution avoids matrix inversion operations, together with the related numerical and complexity issues. The simulation results are obtained in a network echo cancellation scenario and support the performance gain. In this context, the proposed iterative Wiener filter outperforms the conventional benchmark and also some previously developed counterparts that use matrix inversion or second-order tensor decompositions.
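A minimal sketch of the CG solver applied to Wiener–Hopf-type normal equations R h = p (numpy; the correlation matrix below is synthetic, not an echo-path model):

```python
import numpy as np

def conjugate_gradient(R, p, iters=100, tol=1e-12):
    # Solve R h = p iteratively, avoiding an explicit matrix inversion.
    h = np.zeros_like(p)
    r = p - R @ h          # residual
    d = r.copy()           # search direction
    rs = r @ r
    for _ in range(iters):
        Rd = R @ d
        alpha = rs / (d @ Rd)
        h += alpha * d
        r -= alpha * Rd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return h

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
R = A.T @ A + 1e-3 * np.eye(20)   # symmetric positive-definite, correlation-like
p = rng.standard_normal(20)
h = conjugate_gradient(R, p)
```

Each iteration only needs matrix–vector products, which is why CG sidesteps the numerical and complexity issues of inversion mentioned above.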
27

Mørup, Morten, Lars Kai Hansen, and Sidse M. Arnfred. "Algorithms for Sparse Nonnegative Tucker Decompositions." Neural Computation 20, no. 8 (August 2008): 2112–31. http://dx.doi.org/10.1162/neco.2008.11-06-407.

Abstract:
There is an increasing interest in the analysis of large-scale multiway data. The concept of multiway data refers to arrays of data with more than two dimensions, that is, taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions for tensors are the Tucker model and the more restricted PARAFAC model. Both models can be viewed as generalizations of regular factor analysis to data of more than two modalities. Nonnegative matrix factorization (NMF), in conjunction with sparse coding, has recently been given much attention due to its part-based and easily interpretable representation. While NMF has been extended to the PARAFAC model, no such attempt has been made to extend NMF to the Tucker model. However, if the tensor data analyzed are nonnegative, it may well be relevant to consider purely additive (i.e., nonnegative) Tucker decompositions. To reduce the ambiguities of this type of decomposition, we develop updates that can impose sparseness in any combination of modalities; hence, we propose algorithms for sparse nonnegative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms for Tucker decompositions when the data and interactions can be considered nonnegative. We further illustrate how sparse coding can help identify which model (PARAFAC or Tucker) is more appropriate for the data, as well as select the number of components by turning off excess components. The algorithms for SN-TUCKER can be downloaded from Mørup (2007).
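As a hedged illustration of the sparseness mechanism in the matrix case that SN-TUCKER generalizes: the classical multiplicative NMF update with an L1 penalty on H (this is only the building block, not the paper's Tucker algorithm; sizes and the penalty weight are arbitrary):

```python
import numpy as np

def sparse_nmf(V, r, lam=0.01, iters=500, seed=0):
    # V ≈ W H with W, H >= 0; `lam` penalizes the L1 norm of H (sparseness).
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 0.1
    H = rng.random((r, V.shape[1])) + 0.1
    eps = 1e-12
    for _ in range(iters):
        # Multiplicative updates keep all entries nonnegative; the L1 term
        # enters the denominator and shrinks small entries of H toward zero.
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H

V = np.random.default_rng(1).random((20, 2)) @ np.random.default_rng(2).random((2, 30))
W, H = sparse_nmf(V, r=2)
```

In the Tucker setting the same multiplicative scheme is applied to each factor matrix (and the core), which is what lets sparseness be imposed in any combination of modalities.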
28

Shcherbakova, Elena M., Sergey A. Matveev, Alexander P. Smirnov, and Eugene E. Tyrtyshnikov. "Study of performance of low-rank nonnegative tensor factorization methods." Russian Journal of Numerical Analysis and Mathematical Modelling 38, no. 4 (August 1, 2023): 231–39. http://dx.doi.org/10.1515/rnam-2023-0018.

Abstract:
In the present paper we compare two different iterative approaches to constructing nonnegative tensor train and Tucker decompositions. The first approach is based on the idea of alternating projections and randomized sketching for the factorization of tensors with nonnegative elements; it can be useful for both TT and Tucker formats. The second approach consists of two stages. At the first stage we find the unconstrained tensor train decomposition of the target array. At the second stage we use this initial approximation in order to fix it within a moderate number of operations and obtain a factorization with nonnegative factors in either the tensor train or the Tucker model. We study the performance of these methods on both synthetic data and a hyperspectral image and demonstrate the clear advantage of the latter technique in terms of computational time and its wider range of possible applications.
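The unconstrained first stage mentioned above is commonly the TT-SVD sweep of reshapes and truncated SVDs; a minimal numpy sketch under that assumption (the nonnegativity-fixing second stage is not shown):

```python
import numpy as np

def tt_svd(tensor, eps=1e-12):
    # Sweep of reshapes and truncated SVDs producing TT cores G_k of
    # shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1.
    shape = tensor.shape
    cores, r = [], 1
    C = tensor
    for k in range(len(shape) - 1):
        C = C.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))   # truncation rank
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        C = s[:rk, None] * Vt[:rk]                 # carry the remainder forward
        r = rk
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

T = np.random.default_rng(0).random((3, 4, 5))
cores = tt_svd(T)
```

Contracting the cores back together reproduces the original array up to the truncation tolerance.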
29

Rawat, K. S., and Sandeep Chauhan. "Study on Einstein-K ̈aehlerian Decomposable Recurrent Space of First Order." Journal of the Tensor Society 9, no. 01 (June 30, 2007): 45–51. http://dx.doi.org/10.56424/jts.v9i01.10567.

Abstract:
Takano [2] studied the decomposition of the curvature tensor in a recurrent space. Sinha and Singh [3] studied and defined the decomposition of the recurrent curvature tensor field in a Finsler space. Singh and Negi studied the decomposition of the recurrent curvature tensor field in a Kaehlerian space, as did Negi and Rawat [6]. Rawat and Silswal [11] studied and defined the decomposition of recurrent curvature tensor fields in a Tachibana space. Further, Rawat and Kunwar Singh [12] studied the decomposition of the curvature tensor field in a Kaehlerian recurrent space of first order. In the present paper, we study the decomposition of the curvature tensor field R^h_{ijk} in terms of two non-zero vectors and a tensor field in an Einstein-Kaehlerian recurrent space of first order, and several theorems are established and proved.
30

Chandru, K., and S. K. Narasimhamurthy. "The Study of Decomposition of Curvature Tensor Field in a Kaehlerian Recurrent Space of First Order." Journal of the Tensor Society 3, no. 00 (June 30, 2009): 11–18. http://dx.doi.org/10.56424/jts.v3i01.9967.

Abstract:
Takano [2] studied the decomposition of the curvature tensor in a recurrent space. Sinha and Singh [3] studied and defined the decomposition of the recurrent curvature tensor field in a Finsler space. Singh and Negi studied the decomposition of the recurrent curvature tensor field in a Kaehlerian space, as did Negi and Rawat [6]. Rawat and Silswal [11] studied and defined the decomposition of recurrent curvature tensor fields in a Tachibana space. In the present paper, we study the decomposition of curvature tensor fields in terms of two non-zero vectors and a tensor field in a Kaehlerian recurrent space of first order; several theorems are established and proved, and the relation between the projective curvature tensor and the Riemannian curvature tensor is established therein.
31

Chandru, K., and S. K. Narasimhamurthy. "The Study of Decomposition of Curvature Tensor Field in a Kaehlerian Recurrent Space of First Order." Journal of the Tensor Society 3, no. 01 (June 30, 2009): 11–18. http://dx.doi.org/10.56424/jts.v3i00.9967.

Abstract:
Takano [2] studied the decomposition of the curvature tensor in a recurrent space. Sinha and Singh [3] studied and defined the decomposition of the recurrent curvature tensor field in a Finsler space. Singh and Negi studied the decomposition of the recurrent curvature tensor field in a Kaehlerian space, as did Negi and Rawat [6]. Rawat and Silswal [11] studied and defined the decomposition of recurrent curvature tensor fields in a Tachibana space. In the present paper, we study the decomposition of curvature tensor fields in terms of two non-zero vectors and a tensor field in a Kaehlerian recurrent space of first order; several theorems are established and proved, and the relation between the projective curvature tensor and the Riemannian curvature tensor is established therein.
32

Longworth, C. E., W. D. Marslen-Wilson, B. Randall, and L. K. Tyler. "Getting to the Meaning of the Regular Past Tense: Evidence from Neuropsychology." Journal of Cognitive Neuroscience 17, no. 7 (July 2005): 1087–97. http://dx.doi.org/10.1162/0898929054475109.

Abstract:
Neuropsychological impairments of English past tense processing inform a key debate in cognitive neuroscience concerning the nature of mental mechanisms. Dual-route accounts claim that regular past tense comprehension deficits reflect a specific impairment of morphological decomposition (e.g., jump + ed), disrupting the automatic comprehension of word meaning accessed via the verb stem (e.g., jump). Single-mechanism accounts claim that the deficits reflect a general phonological impairment that affects perception of regular past tense offsets but which might preserve normal activation of verb semantics. We tested four patients with regular past tense deficits and matched controls, using a paired auditory semantic priming/lexical decision task with three conditions: uninflected verbs (hope/wish), regular past tense primes (blamed/accuse), and irregular past tense primes (shook/tremble). Both groups showed significant priming for verbs with simple morphophonology (uninflected verbs and irregular past tenses) but the patients showed no priming for verbs with complex morphophonology (regular past tenses) in contrast to controls. The findings suggest that the patients are delayed in activating the meaning of verbs if a regular past tense affix is appended, consistent with a dual-route account of their deficit.
33

Oseledets, I. V. "Tensor-Train Decomposition." SIAM Journal on Scientific Computing 33, no. 5 (January 2011): 2295–317. http://dx.doi.org/10.1137/090752286.

34

Brachat, Jerome, Pierre Comon, Bernard Mourrain, and Elias Tsigaridas. "Symmetric tensor decomposition." Linear Algebra and its Applications 433, no. 11-12 (December 2010): 1851–72. http://dx.doi.org/10.1016/j.laa.2010.06.046.

35

El-Mesady, Ahmed, Aleksandr Y. Romanov, Aleksandr A. Amerikanov, and Alexander D. Ivannikov. "On Bipartite Circulant Graph Decompositions Based on Cartesian and Tensor Products with Novel Topologies and Deadlock-Free Routing." Algorithms 16, no. 1 (December 22, 2022): 10. http://dx.doi.org/10.3390/a16010010.

Abstract:
Recent developments in commutative algebra, linear algebra, and graph theory allow us to approach various issues in several fields. Circulant graphs now have a wider range of practical uses, including as the foundation for optical networks, discrete cellular neural networks, small-world networks, models of chemical reactions, supercomputing and multiprocessor systems. Herein, we are concerned with the decompositions of the bipartite circulant graphs. We propose the Cartesian and tensor product approaches as helping tools for the decompositions. The proposed approaches enable us to decompose the bipartite circulant graphs into many categories of graphs. We consider the use cases of applying the described theory of bipartite circulant graph decomposition to the problems of finding new topologies and deadlock-free routing in them when building supercomputers and networks-on-chip.
36

Yang, Hye-Kyung, and Hwan-Seung Yong. "Multi-Aspect Incremental Tensor Decomposition Based on Distributed In-Memory Big Data Systems." Journal of Data and Information Science 5, no. 2 (May 20, 2020): 13–32. http://dx.doi.org/10.2478/jdis-2020-0010.

Abstract:
Purpose: We propose InParTen2, a multi-aspect parallel factor analysis three-dimensional tensor decomposition algorithm based on the Apache Spark framework. The proposed method reduces re-decomposition cost and can handle large tensors.
Design/methodology/approach: Considering that tensor addition increases the size of a given tensor along all axes, the proposed method decomposes incoming tensors using existing decomposition results without generating sub-tensors. Additionally, InParTen2 avoids the calculation of Khatri–Rao products and minimizes shuffling by using the Apache Spark platform.
Findings: The performance of InParTen2 is evaluated by comparing its execution time and accuracy with those of existing distributed tensor decomposition methods on various datasets. The results confirm that InParTen2 can process large tensors and reduce the re-calculation cost of tensor decomposition. Consequently, the proposed method is faster than existing tensor decomposition algorithms and can significantly reduce re-decomposition cost.
Research limitations: There are several Hadoop-based distributed tensor decomposition algorithms as well as MATLAB-based decomposition methods. However, the former require longer iteration time, and therefore their execution time cannot be compared with that of Spark-based algorithms, whereas the latter run on a single machine, thus limiting their ability to handle large data.
Practical implications: The proposed algorithm can reduce re-decomposition cost when tensors are added to a given tensor by decomposing them based on existing decomposition results without re-decomposing the entire tensor.
Originality/value: The proposed method can handle large tensors and is fast within the limited-memory framework of Apache Spark. Moreover, InParTen2 can handle static as well as incremental tensor decomposition.
37

Negi, U. S., Preeti Chauhan, and Sulochana. "Decomposition of Riemannian Recurrent Curvature Tensor Manifolds of First Order." Journal of Nepal Mathematical Society 5, no. 2 (December 20, 2022): 65–71. http://dx.doi.org/10.3126/jnms.v5i2.50082.

Abstract:
Takano [6] studied the decomposition of the curvature tensor in a recurrent Riemannian space. After that, Negi and Bisht [3] defined and discussed the decomposition of recurrent curvature tensor fields in Kaehlerian manifolds of first order. We calculate the decomposition of Riemannian recurrent curvature tensor manifolds of first order and establish several theorems using the decomposed tensor field.
38

Gao, X. L., and Y. Q. Li. "The upper triangular decomposition of the deformation gradient: possible decompositions of the distortion tensor." Acta Mechanica 229, no. 5 (December 23, 2017): 1927–48. http://dx.doi.org/10.1007/s00707-017-2075-1.

39

Ouerfelli, Mohamed, Mohamed Tamaazousti, and Vincent Rivasseau. "Random Tensor Theory for Tensor Decomposition." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7913–21. http://dx.doi.org/10.1609/aaai.v36i7.20761.

Abstract:
We propose a new framework for tensor decomposition based on trace invariants, which are particular cases of tensor networks. In general, tensor networks are diagrams/graphs that specify a way to "multiply" a collection of tensors together to produce another tensor, matrix or scalar. The particularity of trace invariants is that the operation of multiplying copies of a certain input tensor that produces a scalar obeys specific symmetry constraints. In other words, the scalar resulting from this multiplication is invariant under some specific transformations of the involved tensor. We focus our study on the O(N)-invariant graphs, i.e. invariant under orthogonal transformations of the input tensor. The proposed approach is novel and versatile since it allows us to address different theoretical and practical aspects of both CANDECOMP/PARAFAC (CP) and Tucker decomposition models. In particular we obtain several results: (i) we generalize the computational limit of Tensor PCA (a rank-one tensor decomposition) to the case of a tensor with axes of different dimensions; (ii) we introduce new algorithms for both decomposition models; (iii) we obtain theoretical guarantees for these algorithms; and (iv) we show improvements with respect to the state of the art on synthetic and real data, which also highlights a promising potential for practical applications.
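The invariance property underlying the framework can be checked directly for the simplest (degree-2) trace invariant; the O(N)-invariant graphs in the paper are higher-degree analogues of this numpy sketch:

```python
import numpy as np

def random_orthogonal(n, rng):
    # QR of a Gaussian matrix yields an orthogonal matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5))   # axes of different dimensions, as in (i)

# Degree-2 trace invariant: contract two copies of T over all indices.
inv = np.einsum('ijk,ijk->', T, T)

# Apply an independent orthogonal transformation to each axis.
Qs = [random_orthogonal(n, rng) for n in T.shape]
T_rot = np.einsum('ia,jb,kc,abc->ijk', Qs[0], Qs[1], Qs[2], T)

# The scalar is unchanged: invariance under orthogonal transformations.
assert np.isclose(np.einsum('ijk,ijk->', T_rot, T_rot), inv)
```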
40

Al-Qashbari, Adel Mohammed Ali, and Abdullah Saeed Abdullah Saeed. "Decomposition of Curvatur tensor filed \(R_jkh^i \) recurrent spaces of first and second order." University of Aden Journal of Natural and Applied Sciences 27, no. 2 (October 31, 2023): 281–89. http://dx.doi.org/10.47372/uajnas.2023.n2.a09.

Abstract:
Finsler geometry has many uses in relativistic physics, and many mathematicians have contributed to this study and improved it. Takano [26] studied the decomposition of the curvature tensor in a recurrent space. Sinha and Singh [25] studied and defined the decomposition of the recurrent curvature tensor field in a Finsler space. Negi and Rawat [11], [12] studied the decomposition of recurrent curvature tensor fields in Kaehlerian space. Rawat and Silswal [19] studied and defined the decomposition of recurrent curvature tensor fields in a Tachibana space. Rawat and Singh [21] studied the decomposition of the curvature tensor field in a Kaehlerian recurrent space of first order. Further, Rawat and others [20], [22], [23] studied the decomposition of the curvature tensor field in an Einstein-Kaehlerian recurrent space of first order. Al-Qashbari [1], [2], [3], [4] and Qasem and others [14], [15], [16], [17], [18] studied recurrence for different curvature tensors. In the present paper, we study the decomposition of the curvature tensor field \(R^i_{jkh}\) in recurrent spaces of first and second order, and several theorems are established and proved.
41

Sorber, Laurent, Marc Van Barel, and Lieven De Lathauwer. "Optimization-Based Algorithms for Tensor Decompositions: Canonical Polyadic Decomposition, Decomposition in Rank-$(L_r,L_r,1)$ Terms, and a New Generalization." SIAM Journal on Optimization 23, no. 2 (January 2013): 695–720. http://dx.doi.org/10.1137/120868323.

42

Cariello, Daniel. "Separability for weakly irreducible matrices." Quantum Information and Computation 14, no. 15&16 (November 2014): 1308–37. http://dx.doi.org/10.26421/qic14.15-16-4.

Abstract:
This paper is devoted to the study of the separability problem in the field of quantum information theory. We focus on the bipartite finite-dimensional case and on two types of matrices: SPC and PPT matrices (see definitions 32 and 33). We prove that many results hold for both types. If these matrices have specific Hermitian Schmidt decompositions, then they are separable in a very strong sense (see theorem 38 and corollary 39). We prove that both types have what we call split decompositions (see theorems 41 and 42). We also define the notion of a weakly irreducible matrix (see definition 43), based on the concept of irreducible state defined recently in [chen1], [chen] and [chen2]. These split decomposition theorems imply that every SPC (PPT) matrix can be decomposed into a sum of s+1 SPC (PPT) matrices, of which the first s are weakly irreducible, by theorem 48, and the last one has a further split decomposition of lower tensor rank, by corollary 49. Thus the SPC (PPT) matrix is decomposed in a finite number of steps into a sum of weakly irreducible matrices. Different components of this sum have support on orthogonal local Hilbert spaces; therefore the matrix is separable if and only if each component is separable. This reduces the separability problem for SPC (PPT) matrices to the case of weakly irreducible SPC (PPT) matrices. We also provide a complete description of weakly irreducible matrices of both types (see theorem 46). Using the fact that every positive semidefinite Hermitian matrix with tensor rank 2 is separable (see theorem 58), we find sharp inequalities providing separability for both types (see theorems 61 and 62).
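The PPT property referred to above can be tested directly by transposing one subsystem and checking positivity; a numpy sketch on standard 2⊗2 examples (these states are textbook illustrations, not taken from the paper):

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    # Transpose subsystem B: rho[(a,b),(a',b')] -> rho[(a,b'),(a',b)].
    return rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

# A separable diagonal state passes the PPT test ...
rho_sep = np.diag([0.5, 0.0, 0.0, 0.5])
assert np.linalg.eigvalsh(partial_transpose(rho_sep, 2, 2)).min() >= -1e-12

# ... while the maximally entangled Bell state fails it.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(psi, psi)
assert np.linalg.eigvalsh(partial_transpose(rho_bell, 2, 2)).min() < -1e-6
```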
43

Lilley, F. E. M. (Ted). "Magnetotelluric tensor decomposition: Part I, Theory for a basic procedure." GEOPHYSICS 63, no. 6 (November 1998): 1885–97. http://dx.doi.org/10.1190/1.1444481.

Abstract:
The problem of expressing a general 3-D magnetotelluric (MT) impedance tensor in the form of a 2-D tensor that has been distorted in some way is addressed first in terms of a general theorem. This theorem shows that when the real and quadrature parts of a tensor are analyzed separately as distinct matrices, all that is necessary to make a matrix with 2-D characteristics from one with 3-D characteristics is to allow the electric and magnetic observing axes to rotate independently. The process is then examined in terms of the operations of twist and pure shear (“split”) on such matrices. Both of two basic sequences, split after twist and twist after split, produce a typical 3-D matrix from one initially 1-D, with the parameters of split contributing 2-D characteristics to the final matrix. Taken in reverse, these sequences offer two basic paths for the decomposition of a 3-D matrix, and are seen to be linked to the initial theorem. The various operations on matrices are expressed diagrammatically using the Mohr circle construction, of which, it is demonstrated, two types are possible. Mohr circles of an observed MT tensor display all the information held by the tensor, and the two types of circle construction respectively make clear whether particular data are well suited to modeling by either split after twist or twist after split. Generally, tensor decompositions may be displayed by charting their progress in Mohr space. The Mohr construction also displays the invariants of a tensor and shows that tensor decomposition can be viewed as a process of determining an appropriate set of invariants. An expectation that the origin of axes should be outside every circle categorizes as irregular any tensors which, in either the real or quadrature part, do not satisfy a [Formula: see text] criterion. The theory of the present paper applies equally to procedures for distorting 1-D and 2-D model calculations for the purpose of matching observed 3-D data.
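For each of the real and quadrature parts, the opening theorem amounts to a singular value decomposition: independent orthogonal transformations of the electric and magnetic axes reduce a general 2×2 real matrix to diagonal form (the 2-D anti-diagonal convention differs from this only by a fixed axis swap and signs). A numpy sketch:

```python
import numpy as np

Z = np.random.default_rng(0).standard_normal((2, 2))  # real part of a 3-D MT tensor

# The SVD factors play the role of (possibly improper) rotations of the
# electric and magnetic observing axes, applied independently to each side.
U, s, Vt = np.linalg.svd(Z)
Z_2d = U.T @ Z @ Vt.T

# Off-diagonal elements vanish: the matrix now has 2-D characteristics.
assert np.allclose(Z_2d, np.diag(s))
```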
44

Abe, Shota, Akio Ishida, Jun Murakami, and Naoki Yamamoto. "Development of Online Learning Materials for Tensor Data Processing Exercises." International Journal of Information and Education Technology 12, no. 3 (2022): 194–202. http://dx.doi.org/10.18178/ijiet.2022.12.3.1604.

Abstract:
Tensor decomposition is used in a wide range of research fields; however, its theory is difficult to understand, so basic education is essential before using it in programming. Currently, few Japanese universities provide education on tensor decomposition; however, some overseas universities already do, and online learning materials are also substantial. Therefore, in this paper, we develop online learning materials covering the basics of, and programming exercises on, higher-order singular value decomposition (HOSVD), one type of tensor decomposition, with the aim of increasing the learning materials available for tensor decomposition education. Our learning material is created on Microsoft Teams; students can access the material channel and work on exercises on demand while watching explanatory videos, including CG animation. A trial of this learning material showed that students who used it could generally understand the processes related to tensor decomposition and could perform basic programming of them.
45

Kolda, Tamara G. "Orthogonal Tensor Decompositions." SIAM Journal on Matrix Analysis and Applications 23, no. 1 (January 2001): 243–55. http://dx.doi.org/10.1137/s0895479800368354.

46

Nie, Jiawang, and Zi Yang. "Hermitian Tensor Decompositions." SIAM Journal on Matrix Analysis and Applications 41, no. 3 (January 2020): 1115–44. http://dx.doi.org/10.1137/19m1306889.

47

Bigoni, Daniele, Allan P. Engsig-Karup, and Youssef M. Marzouk. "Spectral Tensor-Train Decomposition." SIAM Journal on Scientific Computing 38, no. 4 (January 2016): A2405—A2439. http://dx.doi.org/10.1137/15m1036919.

48

Erichson, N. Benjamin, Krithika Manohar, Steven L. Brunton, and J. Nathan Kutz. "Randomized CP tensor decomposition." Machine Learning: Science and Technology 1, no. 2 (June 1, 2020): 025012. http://dx.doi.org/10.1088/2632-2153/ab8240.

49

Sun, Will Wei, Junwei Lu, Han Liu, and Guang Cheng. "Provable sparse tensor decomposition." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 79, no. 3 (June 23, 2016): 899–916. http://dx.doi.org/10.1111/rssb.12190.

50

Tichavsky, Petr, Anh-Huy Phan, and Andrzej Cichocki. "Sensitivity in Tensor Decomposition." IEEE Signal Processing Letters 26, no. 11 (November 2019): 1653–57. http://dx.doi.org/10.1109/lsp.2019.2943060.
