A selection of scientific literature on the topic "Rank-one tensors"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Rank-one tensors".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Rank-one tensors"

1

POPA, FLORIAN CATALIN, and OVIDIU TINTAREANU-MIRCEA. "IRREDUCIBLE KILLING TENSORS FROM THIRD RANK KILLING–YANO TENSORS." Modern Physics Letters A 22, no. 18 (June 14, 2007): 1309–17. http://dx.doi.org/10.1142/s0217732307023559.

Abstract:
We investigate higher-rank Killing–Yano tensors, showing that third-rank Killing–Yano tensors are not always trivial objects, since irreducible Killing tensors can be constructed from them. As an example, for the Kimura IIC metric we obtain a reducible Killing tensor from second-rank Killing–Yano tensors, while from third-rank Killing–Yano tensors we obtain three Killing tensors, one reducible and two irreducible.
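For context, the construction the abstract alludes to is standard (this restatement is ours, not quoted from the paper): a Killing–Yano tensor is totally antisymmetric and satisfies $\nabla_{(\mu}Y_{\nu)\lambda\rho}=0$, and contracting it with itself produces a symmetric Killing tensor,
$$K_{\mu\nu} \;=\; Y_{\mu\alpha\beta}\,Y_{\nu}{}^{\alpha\beta}, \qquad \nabla_{(\mu}K_{\nu\rho)}=0,$$
the question studied in the paper being whether the Killing tensors obtained this way are reducible (e.g., built from Killing vectors and the metric) or genuinely irreducible.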
2

Tyrtyshnikov, Eugene E. "Tensor decompositions and rank increment conjecture." Russian Journal of Numerical Analysis and Mathematical Modelling 35, no. 4 (August 26, 2020): 239–46. http://dx.doi.org/10.1515/rnam-2020-0020.

Abstract:
Some properties of tensor ranks and the non-closedness of sets defined by restrictions on the rank of the tensors they contain are studied. It is proved that the rank of the d-dimensional Laplacian equals d. The following conjecture is formulated: for any tensor of non-maximal rank there exists a nonzero decomposable tensor (tensor of rank 1) such that the rank increases by one after adding this tensor. In the general case, it is proved that this property holds algebraically almost everywhere for complex tensors of fixed size whose rank is strictly less than the generic rank.
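In symbols, the conjecture quoted above says (our formalization, with $\otimes$ denoting the outer product): for every order-$d$ tensor $A$ whose rank is not maximal there exist nonzero vectors $u_1,\dots,u_d$ such that
$$\operatorname{rank}\bigl(A + u_1\otimes u_2\otimes\cdots\otimes u_d\bigr) \;=\; \operatorname{rank}(A) + 1.$$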
3

Zhang, Tong, and Gene H. Golub. "Rank-One Approximation to High Order Tensors." SIAM Journal on Matrix Analysis and Applications 23, no. 2 (January 2001): 534–50. http://dx.doi.org/10.1137/s0895479899352045.

4

Hu, Shenglong, Defeng Sun, and Kim-Chuan Toh. "Best Nonnegative Rank-One Approximations of Tensors." SIAM Journal on Matrix Analysis and Applications 40, no. 4 (January 2019): 1527–54. http://dx.doi.org/10.1137/18m1224064.

5

Bachmayr, Markus, Wolfgang Dahmen, Ronald DeVore, and Lars Grasedyck. "Approximation of High-Dimensional Rank One Tensors." Constructive Approximation 39, no. 2 (November 12, 2013): 385–95. http://dx.doi.org/10.1007/s00365-013-9219-x.

6

Friedland, S., V. Mehrmann, R. Pajarola, and S. K. Suter. "On best rank one approximation of tensors." Numerical Linear Algebra with Applications 20, no. 6 (March 19, 2013): 942–55. http://dx.doi.org/10.1002/nla.1878.

7

Breiding, Paul, and Nick Vannieuwenhoven. "On the average condition number of tensor rank decompositions." IMA Journal of Numerical Analysis 40, no. 3 (June 20, 2019): 1908–36. http://dx.doi.org/10.1093/imanum/drz026.

Abstract:
We compute the expected value of powers of the geometric condition number of random tensor rank decompositions. It is shown in particular that the expected value of the condition number of $n_1\times n_2 \times 2$ tensors with a random rank-$r$ decomposition, given by factor matrices with independent and identically distributed standard normal entries, is infinite. This entails that it is expected and probable that such a rank-$r$ decomposition is sensitive to perturbations of the tensor. Moreover, it provides concrete further evidence that tensor decomposition can be a challenging problem, also from the numerical point of view. On the other hand, we provide strong theoretical and empirical evidence that tensors of size $n_1 \times n_2 \times n_3$ with all $n_1,n_2,n_3 \geqslant 3$ have a finite average condition number. This suggests that there exists a gap in the expected sensitivity of tensors between those of format $n_1\times n_2 \times 2$ and other order-3 tensors. To establish these results we show that a natural weighted distance from a tensor rank decomposition to the locus of ill-posed decompositions with an infinite geometric condition number is bounded from below by the inverse of this condition number. That is, we prove one inequality towards a so-called condition number theorem for the tensor rank decomposition.
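The inequality mentioned at the end of the abstract can be stated schematically as (our notation, suppressing the precise weighting of the distance)
$$\operatorname{dist}\bigl((A_1,\dots,A_r),\,\Sigma_\infty\bigr)\;\geq\;\frac{1}{\kappa(A_1,\dots,A_r)},$$
where $(A_1,\dots,A_r)$ is a rank-$r$ decomposition, $\Sigma_\infty$ is the locus of ill-posed decompositions with infinite geometric condition number, and $\kappa$ is the geometric condition number.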
8

Grasedyck, Lars, and Wolfgang Hackbusch. "An Introduction to Hierarchical (H-) Rank and TT-Rank of Tensors with Examples." Computational Methods in Applied Mathematics 11, no. 3 (2011): 291–304. http://dx.doi.org/10.2478/cmam-2011-0016.

Abstract:
We review two similar concepts of hierarchical rank of tensors (which extend the matrix rank to higher order tensors): the TT-rank and the H-rank (hierarchical or H-Tucker rank). Based on this notion of rank, one can define a data-sparse representation of tensors involving $O(dnk + dk^3)$ data for order $d$ tensors with mode sizes $n$ and rank $k$. Simple examples underline the differences and similarities between the different formats and ranks. Finally, we derive rank bounds for tensors in one of the formats based on the ranks in the other format.
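To make the notion of TT-rank concrete, here is a minimal TT-SVD sketch in NumPy (our illustration, not code from the paper): a $d$-way array is split into a chain of order-3 cores by sequential truncated SVDs, and `max_rank` caps every TT-rank.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Split a d-way array into TT cores of shape (r_{k-1}, n_k, r_k)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r_k = min(max_rank, len(s))                     # truncate to the target rank
        cores.append(U[:, :r_k].reshape(r_prev, dims[k], r_k))
        mat = (s[:r_k, None] * Vt[:r_k]).reshape(r_k * dims[k + 1], -1)
        r_prev = r_k
    cores.append(mat.reshape(r_prev, dims[-1], 1))      # final core
    return cores
```

Contracting the cores back together reconstructs an approximation of the tensor; storing them costs $O(dnk^2)$ numbers, while the $O(dnk + dk^3)$ count quoted above refers to the hierarchical format, where leaf matrices and transfer tensors are stored separately.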
9

Krieg, David, and Daniel Rudolf. "Recovery algorithms for high-dimensional rank one tensors." Journal of Approximation Theory 237 (January 2019): 17–29. http://dx.doi.org/10.1016/j.jat.2018.08.002.

10

Milošević, Ivanka. "Second-rank tensors for quasi-one-dimensional systems." Physics Letters A 204, no. 1 (August 1995): 63–66. http://dx.doi.org/10.1016/0375-9601(95)00412-v.


Dissertations on the topic "Rank-one tensors"

1

Wang, Roy Chih Chung. "Adaptive Kernel Functions and Optimization Over a Space of Rank-One Decompositions." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36975.

Abstract:
The representer theorem from the reproducing kernel Hilbert space theory is the origin of many kernel-based machine learning and signal modelling techniques that are popular today. Most kernel functions used in practical applications behave in a homogeneous manner across the domain of the signal of interest, and they are called stationary kernels. One open problem in the literature is the specification of a non-stationary kernel that is computationally tractable. Some recent works solve large-scale optimization problems to obtain such kernels, and they often suffer from non-identifiability issues in their optimization problem formulation. Many practical problems can benefit from using application-specific prior knowledge on the signal of interest. For example, if one can adequately encode the prior assumption that edge contours are smooth, one does not need to learn a finite-dimensional dictionary from a database of sampled image patches that each contains a circular object in order to up-convert images that contain circular edges. In the first portion of this thesis, we present a novel method for constructing non-stationary kernels that incorporates prior knowledge. A theorem is presented that ensures the result of this construction yields a symmetric and positive-definite kernel function. This construction does not require one to solve any non-identifiable optimization problems. It does require one to manually design some portions of the kernel while deferring the specification of the remaining portions to when an observation of the signal is available. In this sense, the resultant kernel is adaptive to the data observed. We give two examples of this construction technique via the grayscale image up-conversion task where we chose to incorporate the prior assumption that edge contours are smooth. Both examples use a novel local analysis algorithm that summarizes the p-most dominant directions for a given grayscale image patch. The non-stationary properties of these two types of kernels are empirically demonstrated on the Kodak image database that is popular within the image processing research community. Tensors and tensor decomposition methods are gaining popularity in the signal processing and machine learning literature, and most of the recently proposed tensor decomposition methods are based on the tensor power and alternating least-squares algorithms, which were both originally devised over a decade ago. The algebraic approach for the canonical polyadic (CP) symmetric tensor decomposition problem is an exception. This approach exploits the bijective relationship between symmetric tensors and homogeneous polynomials. The solution of a CP symmetric tensor decomposition problem is a set of p rank-one tensors, where p is fixed. In this thesis, we refer to such a set of tensors as a rank-one decomposition with cardinality p. Existing works show that the CP symmetric tensor decomposition problem is non-unique in the general case, so there is no bijective mapping between a rank-one decomposition and a symmetric tensor. However, a proposition in this thesis shows that a particular space of rank-one decompositions, SE, is isomorphic to a space of moment matrices that are called quasi-Hankel matrices in the literature. Optimization over Riemannian manifolds is an area of optimization literature that is also gaining popularity within the signal processing and machine learning community. 
Under some settings, one can formulate optimization problems over differentiable manifolds where each point is an equivalence class. Such manifolds are called quotient manifolds. This type of formulation can reduce or eliminate some of the sources of non-identifiability issues for certain optimization problems. An example is the learning of a basis for a subspace by formulating the solution space as a type of quotient manifold called the Grassmann manifold, while the conventional formulation is to optimize over a space of full column rank matrices. The second portion of this thesis is about the development of a general-purpose numerical optimization framework over SE. A general-purpose numerical optimizer can solve different approximations or regularized versions of the CP decomposition problem, and they can be applied to tensor-related applications that do not use a tensor decomposition formulation. The proposed optimizer uses many concepts from the Riemannian optimization literature. We present a novel formulation of SE as an embedded differentiable submanifold of the space of real-valued matrices with full column rank, and as a quotient manifold. Riemannian manifold structures and tangent space projectors are derived as well. The CP symmetric tensor decomposition problem is used to empirically demonstrate that the proposed scheme is indeed a numerical optimization framework over SE. Future investigations will concentrate on extending the proposed optimization framework to handle decompositions that correspond to non-symmetric tensors.
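For context on the tensor power algorithm mentioned above, here is a minimal sketch (our illustration under simplifying assumptions, not code from the thesis) of symmetric higher-order power iteration for extracting one rank-one term of a symmetric 3-way tensor; convergence is not guaranteed for every symmetric tensor, and shifted variants are often used in practice.

```python
import numpy as np

def symmetric_rank_one_term(T, iters=200, seed=0):
    """One rank-one term of a symmetric 3-way tensor T via higher-order power iteration."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)    # contract T with x in two modes
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)  # generalized Rayleigh quotient
    return lam, x                               # T ≈ lam * x ⊗ x ⊗ x + remainder
```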
2

Morgan, William Russell IV. "Investigations into Parallelizing Rank-One Tensor Decompositions." Thesis, University of Maryland, Baltimore County, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10683240.

Abstract:

Tensor decompositions are a solved problem in terms of computing a result; their performance, however, is not. Several projects parallelize tensor decompositions using a variety of methods. This work investigates other possible strategies for parallelizing rank-one tensor decompositions, measures performance across a variety of tensor sizes, and reports the most promising avenues for further investigation.

3

Sokal, Bruno. "Semi-blind receivers for multi-relaying mimo systems using rank-one tensor factorizations." reponame:Repositório Institucional da UFC, 2017. http://www.repositorio.ufc.br/handle/riufc/25988.

Abstract:
SOKAL, B. Semi-blind receivers for multi-relaying MIMO systems using rank-one tensor factorizations. 2017. 85 pp. Master's thesis (Teleinformatics Engineering), Centro de Tecnologia, Universidade Federal do Ceará, Fortaleza, 2017.
Cooperative communications have been shown to be an alternative for combating the impairments of signal propagation in wireless communications, such as path loss and shadowing, by creating a virtual array of antennas for the source. In this work, we start with a two-hop MIMO system using a single relay. By adding a space-time filtering step at the receiver, we propose a rank-one tensor factorization model for the resulting signal. Exploiting this model, two semi-blind receivers for joint symbol and channel estimation are derived: i) an iterative receiver based on the trilinear alternating least squares (Tri-ALS) algorithm and ii) a closed-form receiver based on the truncated higher order SVD (T-HOSVD). For this system, we also propose a space-time coding tensor having a PARAFAC decomposition structure, which gives more flexibility to system design, while allowing an orthogonal coding. In the second part of this work, we present an extension of the rank-one factorization approach to a multi-relaying scenario, and a closed-form semi-blind receiver based on coupled SVDs (C-SVD) is derived. The C-SVD receiver efficiently combines all the available cooperative links to enhance channel and symbol estimation performance, while enjoying a parallel implementation.
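As a rough illustration of the closed-form idea behind a T-HOSVD-type estimator (our sketch, not the implementation from the thesis): for a noise-corrupted rank-one tensor, each factor can be estimated as the dominant left singular vector of the corresponding mode unfolding.

```python
import numpy as np

def rank_one_factors(X):
    """Estimate a, b, c and a scale so that a 3-way tensor X ≈ scale * a ∘ b ∘ c."""
    factors = []
    for mode in range(X.ndim):
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        factors.append(np.linalg.svd(unfolding, full_matrices=False)[0][:, 0])
    scale = np.einsum('ijk,i,j,k->', X, *factors)   # assumes X is 3-way
    return scale, factors
```

In the receivers described above, such factor estimates would then be mapped back to symbol and channel estimates, up to the usual scaling ambiguities of semi-blind estimation.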
4

Ossman, Hala. "Etude mathématique de la convergence de la PGD variationnelle dans certains espaces fonctionnels." Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS006/document.

Abstract:
In this thesis, we are interested in the PGD (Proper Generalized Decomposition), one of the reduced-order model methods, which consists in searching, a priori, for the solution of a partial differential equation in separated form. This work is composed of five chapters in which we aim to extend the PGD to fractional spaces and to spaces of functions of bounded variation, and to give theoretical interpretations of this method for a class of elliptic and parabolic problems. In the first chapter, we give a brief review of the literature and then introduce the mathematical notions and tools used in this work. In the second chapter, the convergence of rank-one alternating minimization (AM) algorithms for a class of variational linear elliptic equations is studied. We show that rank-one AM sequences are in general bounded in the ambient Hilbert space and are compact if a uniform non-orthogonality condition between the iterates and the reaction term is fulfilled. In particular, if a rank-one AM sequence is weakly convergent, then it converges strongly and the common limit is a solution of the alternating minimization problem. In the third chapter, we introduce the notion of fractional derivatives in the sense of Riemann–Liouville and then consider a variational problem which is a fractional-order generalization of the Poisson equation. Based on the quadratic nature and the decomposability of the associated energy, we prove that the progressive PGD sequence converges strongly towards the weak solution of this problem. In the fourth chapter, we exploit the tensorial structure of the spaces BV with respect to the weak-star topology to define PGD sequences in this type of space. The convergence of such a sequence remains an open question. The last chapter is devoted to the d-dimensional heat equation: we discretize in time and then, at each time step, seek the solution of the elliptic equation using the PGD. We then show that the piecewise linear-in-time function obtained from the solutions constructed using the PGD converges to the weak solution of the equation.
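For readers unfamiliar with the PGD, the separated representation it constructs can be sketched as (our notation)
$$u(x_1,\dots,x_d)\;\approx\;u_m \;=\; \sum_{i=1}^{m} w_i^{(1)}(x_1)\,w_i^{(2)}(x_2)\cdots w_i^{(d)}(x_d),$$
where the progressive PGD adds one rank-one (separated) term per enrichment step and computes it by alternating minimization over the factors — the rank-one AM sequences whose convergence is analyzed in the second chapter.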
5

Sodomaco, Luca. "The Distance Function from the Variety of partially symmetric rank-one Tensors." Doctoral thesis, 2020. http://hdl.handle.net/2158/1220535.

Abstract:
The topic of this doctoral thesis is at the intersection between Real Algebraic Geometry, Optimization Theory and Multilinear Algebra. In particular, a relevant part of this thesis is dedicated to studying metric invariants of real algebraic varieties, with a particular interest in varieties in tensor spaces. In many applications, tensors arise as a useful way to store and organize experimental data. For example, it is widely known that tensor techniques are extremely useful in Algebraic Statistics. A strong relationship between classical algebraic geometry and multilinear algebra is established by the notion of tensor rank. Geometrically speaking, the problem of computing the rank of a tensor translates to a membership problem to a certain secant variety of a Segre product of projective spaces. In the last fifteen years, a new line of research in tensor theory has been undertaken and is commonly known as Spectral Theory of Tensors. One of the foundational motivations of this theory comes from the need, e.g., in some constrained optimization problems, to approximate a given tensor to its closest tensor of fixed lower rank, with respect to the Frobenius norm, also known as Bombieri norm. This is the so-called best rank-$k$ approximation problem for real tensors. In this context, an important role is played by the singular vector tuples and the singular values of a tensor, which generalize the notions of eigenvector and eigenvalue of a matrix. Their symmetric counterpart is represented by the E-eigenvalues and the E-eigenvectors of a symmetric tensor. Of particular interest is the E-characteristic polynomial of a symmetric tensor, which has among its roots the E-eigenvalues of a symmetric tensor. For symmetric matrices, it coincides with the classical characteristic polynomial. We interpret the E-characteristic polynomial as an algebraic relation satisfied by the Frobenius distance between an assigned symmetric tensor and the dual affine cone of a Veronese variety. We show that the E-characteristic polynomial is monic only in the symmetric matrix case. We provide a rational formula for the product of the singular values of a partially symmetric tensor of hypercubic format. The formula generalizes the fact that the determinant of a symmetric matrix is equal to the product of its eigenvalues. This is the only case where no denominator occurs in the formula. Computing the distance from a variety of low-rank tensors is an important instance of a more general problem: computing the distance from a real algebraic variety X in a Euclidean space (V,q). We introduce a polynomial, called the Euclidean Distance polynomial of X, which, for any data point u in V, has among its roots the distance ε from u to X. The $\varepsilon^2$-degree of the ED polynomial is the known Euclidean Distance degree of X. When X is transversal to the isotropic quadric $Q = \{q = 0\}$, we show that the ED polynomial of X is monic and we describe its lowest term completely.
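For reference, the E-eigenpairs mentioned above are usually defined as follows (standard convention, not quoted from the thesis): for a symmetric tensor $T$ of order $d$ on $\mathbb{R}^n$, a pair $(\lambda, x)$ is an E-eigenpair if
$$\sum_{i_2,\dots,i_d} T_{i\,i_2\cdots i_d}\,x_{i_2}\cdots x_{i_d} \;=\; \lambda\,x_i \quad (i=1,\dots,n), \qquad \|x\|_2 = 1;$$
for $d = 2$ this is the ordinary eigenvalue equation, which is why the E-characteristic polynomial reduces to the classical characteristic polynomial for symmetric matrices.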

Book chapters on the topic "Rank-one tensors"

1

Liu, Chang, Kun He, Ji-liu Zhou, and Chao-Bang Gao. "Discriminant Orthogonal Rank-One Tensor Projections for Face Recognition." In Intelligent Information and Database Systems, 203–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20042-7_21.

2

Kobayashi, Toshiyuki, and Birgit Speh. "Minor Summation Formulæ Related to Exterior Tensor $\bigwedge^i(\mathbb{C}^n)$." In Symmetry Breaking for Representations of Rank One Orthogonal Groups II, 111–18. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2901-2_7.

3

Kaimakamis, George, and Konstantina Panagiotidou. "The *-Ricci Tensor of Real Hypersurfaces in Symmetric Spaces of Rank One or Two." In Springer Proceedings in Mathematics & Statistics, 199–210. Tokyo: Springer Japan, 2014. http://dx.doi.org/10.1007/978-4-431-55215-4_18.

4

Oertel, Gerhard. "Effects of Stress." In Stress and Deformation. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195095036.003.0011.

Abstract:
The simplest relationship between stress and strain is Hooke's law, describing the linear elastic response of solids to stress. Elastic strain (almost in all cases small) is proportional to the applied stress, with one proportionality factor expressing the relationship between normal stress and strain, and another that between tangential stress and strain. An ideally elastic strain is completely reversed upon removal of the stress that has caused it. Most materials obey Hooke's law somewhat imperfectly, and that only up to a critical yield stress beyond which they begin to flow and to acquire, in addition to the elastic strain, a permanent strain that does not revert upon stress release. Hooke's law in this form is applied to materials that are elastically isotropic, or can be assumed to be approximately so. Crystals, however, never are elastically isotropic, nor are crystalline materials consisting of constituent grains with a distribution of crystallographic orientations that departs from being uniform. The response of a crystal to a stress (at a level below the yield stress) consists of a strain determined by a matter tensor of the fourth rank, the compliance tensor $s_{ijkl}$: $\varepsilon_{ij} = s_{ijkl}\,\sigma_{kl}$ (7.1), the 81 components of which are constants. Any tensor that describes the linear relationship between two tensors of the second rank is necessarily of the fourth rank, and like other tensors of the fourth rank, the compliance tensor can be referred to a new set of reference coordinates by means of a rotation matrix $a_{ij}$: $s'_{ijkl} = a_{im}a_{jn}a_{ko}a_{lp}\,s_{mnop}$ (7.2). The components of the compliance tensor are highly redundant, first because both the stress and the strain tensors are symmetric, and second because the tensor itself is symmetric. The number of independent components for crystals of the lowest, triclinic (both classes) symmetry is 21, and with increasing crystal symmetry the redundancies become more numerous; only three independent compliances are needed to describe the elastic properties of a cubic crystal.
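A tiny NumPy illustration of Eqs. (7.1)–(7.2) (our sketch, not from the book): the fourth-rank compliance tensor transforms with four copies of the rotation matrix, and Hooke's law is a double contraction over the stress indices.

```python
import numpy as np

def rotate_compliance(s, a):
    """Eq. (7.2): s'_ijkl = a_im a_jn a_ko a_lp s_mnop for a rotation matrix a."""
    return np.einsum('im,jn,ko,lp,mnop->ijkl', a, a, a, a, s)

def strain_from_stress(s, sigma):
    """Eq. (7.1), Hooke's law: eps_ij = s_ijkl sigma_kl."""
    return np.einsum('ijkl,kl->ij', s, sigma)
```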
5

Ting, T. T. C. "Transformation of the Elasticity Matrices and Dual Coordinate Systems." In Anisotropic Elasticity. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195074475.003.0010.

Abstract:
When the elasticity matrices are referred to a rotated coordinate system their elements change and assume different values. We will show in this chapter that, under rotations about the x3-axis, the matrices A and B are tensors of rank one while S, H, L, and M are tensors of rank two. These properties are important in establishing certain invariants that are physically interesting and puzzling. We will also present the amazing Barnett-Lothe integral formalism that allows us to determine S, H, and L without computing the eigenvalues and eigenvectors of elastic constants. New tensors Ni(θ) (i = 1, 2, 3), S(θ), H(θ), L(θ), and Gi(θ) (i = 1, 3) are introduced, and their properties as well as identities relating them are presented. Also introduced is the idea of dual coordinate systems where the position of a point is referred to one coordinate system while the displacement components are referred to another coordinate system. These will be useful in applications. As in Chapter 6 readers may skip this chapter in the first reading. They can return to this chapter later for specific information.
6

Ting, T. T. C. "The Structures and Identities of the Elasticity Matrices." In Anisotropic Elasticity. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195074475.003.0009.

Abstract:
The matrices Q, R, T, A, B, N1, N2, N3, S, H, L, and M introduced in the previous chapter are the elasticity matrices. They depend on elastic constants only, and appear frequently in the solutions to two-dimensional problems. The matrices A, B, and M are complex while the others are real. We present their structures and identities relating them in this chapter. In Chapter 7 we will show that A and B are tensors of rank one and S, H, L, and M are tensors of rank two when the transformation is a rotation about the x3-axis. Readers who are not interested in how the structures of these matrices and the identities relating them are derived may skip this chapter. They may return to this chapter when they read later chapters on applications where the results presented here are employed.
7

Deng, Zhaoxian, and Zhiqiang Zeng. "Multi-View Subspace Clustering by Combining ℓ2,p-Norm and Multi-Rank Minimization of Tensors." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. http://dx.doi.org/10.3233/faia220020.

Abstract:
In this article, based on the self-represented multi-view subspace clustering framework, we propose a new clustering model. Under the assumption that different features can be linearly represented by data mapped to different subspaces, multi-view subspace learning methods can boost clustering performance by exploiting the complementary and consensus information between the various views of the data. We search for the tensor with the lowest rank and then extract its frontal slices to establish a well-structured affinity matrix. Our low-rank constraint is realized through the tensor singular value decomposition (t-SVD). We impose the ℓ2,p-norm to flexibly control the sparsity of the error matrix, making the model more robust to noise. By combining the ℓ2,p-norm with tensor multi-rank minimization, the proposed Multi-view Subspace Clustering (MVSC) model can effectively perform clustering with multiple data resources. We test our model on one real-world spoon dataset and several publicly available datasets. Extensive evaluation has shown that our model is effective and efficient.
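A short NumPy sketch of the multi-rank computation under the t-SVD referred to above (our illustration of the standard definition, not the authors' code): transform the tensor along its third mode with the FFT and take the rank of every frontal slice in the Fourier domain.

```python
import numpy as np

def tsvd_multi_rank(X, tol=1e-10):
    """Multi-rank of a 3-way tensor: ranks of its Fourier-domain frontal slices."""
    Xf = np.fft.fft(X, axis=2)                 # DFT along the third (tube) mode
    ranks = []
    for k in range(X.shape[2]):
        s = np.linalg.svd(Xf[:, :, k], compute_uv=False)
        top = s[0] if s.size else 0.0
        ranks.append(int(np.sum(s > tol * top)) if top > 0 else 0)
    return ranks                               # multi-rank minimization targets this vector
```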
8

Green, Mark, Phillip Griffiths, and Matt Kerr. "Classification of Mumford-Tate Subdomains." In Mumford-Tate Groups and Domains. Princeton University Press, 2012. http://dx.doi.org/10.23943/princeton/9780691154244.003.0008.

Abstract:
This chapter develops an algorithm for determining all Mumford-Tate subdomains of a given period domain. The result is applied to the classification of all complex multiplication Hodge structures (CM Hodge structures) of rank 4 and when the weight n = 1 and n = 3, to an analysis of their Hodge tensors and endomorphism algebras, and the number of components of the Noether-Lefschetz locus. The result is that one has a complex but very rich arithmetic story. Of particular note is the intricate structure of the components of the Noether-Lefschetz loci in D and in its compact dual, and the two interesting cases where the Hodge tensors are generated in degrees 2 and 4. One application is that a particular class of period maps appearing in mirror symmetry never has image in a proper subdomain of D.
9

Newnham, Robert E. "Thermodynamic relationships." In Properties of Materials. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780198520757.003.0008.

Abstract:
In the next few chapters we shall discuss tensors of rank zero to four which relate the intensive variables in the outer triangle of the Heckmann Diagram to the extensive variables in the inner triangle. Effects such as pyroelectricity, permittivity, piezoelectricity, and elasticity are the standard topics in crystal physics that allow us to discuss tensors of rank one through four. First, however, it is useful to introduce the thermodynamic relationships between physical properties and consider the importance of measurement conditions. Before discussing all the cross-coupled relationships, we first define the coupling within the three individual systems. In a thermal system, the basic relationship is between the change in entropy $\delta S$ [J/m³] and the change in temperature $\delta T$ [K]: $\delta S = C\,\delta T$, where $C$ is the specific heat per unit volume [J/m³ K] and $T$ is the absolute temperature. $S$, $T$, and $C$ are all scalar quantities. In a dielectric system the electric displacement $D_i$ [C/m²] changes under the influence of the electric field $E_i$ [V/m]. Both are vectors and therefore the electric permittivity, $\varepsilon_{ij}$, requires two directional subscripts. Occasionally the dielectric stiffness, $\beta_{ij}$, is required as well: $D_i = \varepsilon_{ij} E_j$, $E_i = \beta_{ij} D_j$. Some authors use polarization $P$ rather than electric displacement $D$. The three variables are interrelated through the constitutive relation $D_i = P_i + \varepsilon_0 E_i = \varepsilon_{ij} E_j$. The third linear system in the Heckmann Diagram is mechanical, relating strain $x_{ij}$ to stress $X_{kl}$ [N/m²] through the fourth-rank elastic compliance coefficients $s_{ijkl}$ [m²/N]: $x_{ij} = s_{ijkl} X_{kl}$. Alternatively, Hooke's Law can be expressed in terms of the elastic stiffness coefficients $c_{ijkl}$ [N/m²]: $X_{ij} = c_{ijkl} x_{kl}$. When cross coupling occurs between thermal, electrical, and mechanical variables, the Gibbs free energy $G(T, X, E)$ is used to derive relationships between the property coefficients. Temperature $T$, stress $X$, and electric field $E$ are the independent variables in most experiments.
10

Newnham, Robert E. "Diffusion and ionic conductivity." In Properties of Materials. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780198520757.003.0021.

Abstract:
The phenomenon of atomic and ionic migration in crystals is called solid-state diffusion, and its study has shed light on many problems of technological and scientific importance. Diffusion is intimately connected to the strength of metals at high temperature, to metallurgical processes used to control alloy properties, and to many of the effects of radiation on nuclear reactor materials. Diffusion studies are important in understanding the ionic conductivity of the materials used in fuel cells, the fabrication of semiconductor integrated circuits, the corrosion of metals, and the sintering of ceramics. When two miscible materials are in contact across an interface, the quantity of diffusing material which passes through the interface is proportional to the concentration gradient. The atomic flux is given by $J = -D\,\partial c/\partial Z$, where $J$ is measured per unit time and per unit area, $c$ is the concentration of the diffusing material per unit volume, and $Z$ is the gradient direction. The proportionality factor $D$, the diffusion coefficient, is measured in units of m²/s. This equation is sometimes referred to as Fick's First Law. It describes atomic transport in a form that is analogous to electrical resistivity (Ohm's Law) or thermal conductivity. There are several objections to Fick's Law, as discussed in Section 19.5. Strictly speaking, it is valid only for self-diffusion coefficients measured in small concentration gradients. Since $J$ and $Z$ are both vectors, the diffusion coefficient $D$ is a second rank tensor. As with other symmetric second rank tensors, between one and six measurements are required to specify $D_{ij}$, depending on symmetry. The relationship between structure and anisotropy is more apparent in PbI2. Lead iodide is isostructural with CdI2 in trigonal point group $\bar{3}m$. The self-diffusion of Pb is much easier parallel to the layers where the Pb atoms are in close proximity to one another. Diffusion is more difficult along Z3 = [001] because Pb atoms have a very long jump distance in this direction. The mineral olivine, (Mg, Fe)2SiO4, is an important constituent of the deeper parts of the earth's crust.
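In tensor form, the relation implied by the passage (standard notation, ours) is Fick's First Law with a second-rank diffusion tensor,
$$J_i \;=\; -\,D_{ij}\,\frac{\partial c}{\partial x_j}, \qquad D_{ij} = D_{ji},$$
so that, depending on the crystal symmetry, between one and six independent components of $D_{ij}$ must be measured.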

Conference papers on the topic "Rank-one tensors"

1

Najafi, Mehrnaz, Lifang He, and Philip S. Yu. "Outlier-Robust Multi-Aspect Streaming Tensor Completion and Factorization." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/442.

Abstract:
With the increasing popularity of streaming tensor data such as video and audio, tensor factorization and completion have attracted much attention recently in this area. Existing work usually assumes that streaming tensors only grow in one mode. However, in many real-world scenarios, tensors may grow in multiple modes (or dimensions), i.e., multi-aspect streaming tensors. Standard streaming methods cannot directly handle this type of data elegantly. Moreover, due to inevitable system errors, data may be contaminated by outliers, which cause significant deviations from real data values and make such research particularly challenging. In this paper, we propose a novel method for Outlier-Robust Multi-Aspect Streaming Tensor Completion and Factorization (OR-MSTC), which is a technique capable of dealing with missing values and outliers in multi-aspect streaming tensor data. The key idea is to decompose the tensor structure into an underlying low-rank clean tensor and a structured-sparse error (outlier) tensor, along with a weighting tensor to mask missing data. We also develop an efficient algorithm to solve the non-convex and non-smooth optimization problem of OR-MSTC. Experimental results on various real-world datasets show the superiority of the proposed method over the baselines and its robustness against outliers.
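Schematically, the decomposition described above can be written as (our notation, not the paper's exact formulation)
$$\mathcal{W} \ast \mathcal{X} \;\approx\; \mathcal{W} \ast (\mathcal{L} + \mathcal{E}),$$
where $\mathcal{X}$ is the observed multi-aspect streaming tensor, $\mathcal{L}$ is the low-rank clean tensor, $\mathcal{E}$ is the structured-sparse outlier tensor, $\mathcal{W}$ is the binary weighting tensor masking missing entries, and $\ast$ denotes the elementwise product.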
2

Vora, Jian, Karthik S. Gurumoorthy, and Ajit Rajwade. "Recovery of Joint Probability Distribution from One-Way Marginals: Low Rank Tensors and Random Projections." In 2021 IEEE Statistical Signal Processing Workshop (SSP). IEEE, 2021. http://dx.doi.org/10.1109/ssp49050.2021.9513818.

3

Yang, Chaoqi, Cheng Qian, and Jimeng Sun. "GOCPT: Generalized Online Canonical Polyadic Tensor Factorization and Completion." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/326.

Abstract:
Low-rank tensor factorization or completion is well-studied and applied in various online settings, such as online tensor factorization (where the temporal mode grows) and online tensor completion (where incomplete slices arrive gradually). However, in many real-world settings, tensors may have more complex evolving patterns: (i) one or more modes can grow; (ii) missing entries may be filled; (iii) existing tensor elements can change. Existing methods cannot support such complex scenarios. To fill the gap, this paper proposes a Generalized Online Canonical Polyadic (CP) Tensor factorization and completion framework (named GOCPT) for this general setting, where we maintain the CP structure of such dynamic tensors during the evolution. We show that existing online tensor factorization and completion setups can be unified under the GOCPT framework. Furthermore, we propose a variant, named GOCPTE, to deal with cases where historical tensor elements are unavailable (e.g., privacy protection), which achieves similar fitness as GOCPT but with much less computational cost. Experimental results demonstrate that our GOCPT can improve fitness by up to 2.8% on the JHU Covid data and 9.2% on a proprietary patient claim dataset over baselines. Our variant GOCPTE shows up to 1.2% and 5.5% fitness improvement on two datasets with about 20% speedup compared to the best model.
4

Phan, Anh-Huy, Petr Tichavsky, and Andrzej Cichocki. "Rank-one tensor injection: A novel method for canonical polyadic tensor decomposition." In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016. http://dx.doi.org/10.1109/icassp.2016.7472137.

5

Hou, Jingyao, Feng Zhang, Yao Wang, and Jianjun Wang. "Low-Tubal-Rank Tensor Recovery From One-Bit Measurements." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054163.

6

Vandecappelle, Michiel, Nico Vervliet, and Lieven De Lathauwer. "Rank-one Tensor Approximation with Beta-divergence Cost Functions." In 2019 27th European Signal Processing Conference (EUSIPCO). IEEE, 2019. http://dx.doi.org/10.23919/eusipco.2019.8902937.

7

Ghassemi, Mohsen, Zahra Shakeri, Anand D. Sarwate, and Waheed U. Bajwa. "STARK: Structured dictionary learning through rank-one tensor recovery." In 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). IEEE, 2017. http://dx.doi.org/10.1109/camsap.2017.8313164.

8

Hua, Gang, Paul A. Viola, and Steven M. Drucker. "Face Recognition using Discriminatively Trained Orthogonal Rank One Tensor Projections." In 2007 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2007. http://dx.doi.org/10.1109/cvpr.2007.383107.

9

Hongcheng Wang and N. Ahuja. "Compact representation of multidimensional data using tensor rank-one decomposition." In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1334001.

10

Li, Ping, Jiashi Feng, Xiaojie Jin, Luming Zhang, Xianghua Xu, and Shuicheng Yan. "Online Robust Low-Rank Tensor Learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/303.

Abstract:
The rapid increase of multidimensional data (a.k.a. tensors), such as videos, brings new challenges for low-rank data modeling approaches, such as dynamic data size, complex high-order relations, and multiplicity of low-rank structures. Resolving these challenges requires a new tensor analysis method that can perform tensor data analysis online, which, however, is still absent. In this paper, we propose an Online Robust Low-rank Tensor Modeling (ORLTM) approach to address these challenges. ORLTM dynamically explores the high-order correlations across all tensor modes for low-rank structure modeling. To analyze mixture data from multiple subspaces, ORLTM introduces a new dictionary learning component. ORLTM processes data streamingly and thus requires quite low memory cost that is independent of data size. This makes ORLTM quite suitable for processing large-scale tensor data. Empirical studies have validated the effectiveness of the proposed method on both synthetic data and one practical task, i.e., video background subtraction. In addition, we provide theoretical analysis regarding computational complexity and memory cost, demonstrating the efficiency of ORLTM rigorously.
