Dissertations on the topic "Multilinéaire"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 31 dissertations for your research on the topic "Multilinéaire".
Next to every entry in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever this information is available in the metadata.
Browse dissertations across a wide range of disciplines and format your bibliography correctly.
Pellet-Mary, Alice. "Réseaux idéaux et fonction multilinéaire GGH13." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN048/document.
Lattice-based cryptography is a promising area for constructing cryptographic primitives that are plausibly secure even in the presence of quantum computers. A fundamental problem related to lattices is the shortest vector problem (SVP), which asks to find a shortest non-zero vector in a lattice. This problem is believed to be intractable, even quantumly. Structured lattices, for example ideal lattices or module lattices (the latter being a generalization of the former), are often used to improve the efficiency of lattice-based primitives. The security of most schemes based on structured lattices is related to SVP in module lattices, and a very small number of schemes can also be impacted by SVP in ideal lattices. In this thesis, we first focus on the problem of finding short vectors in ideal and module lattices. We propose an algorithm which, after some exponential pre-computation, performs better on ideal lattices than the best known algorithm for arbitrary lattices. We also present an algorithm to find short vectors in rank-2 modules, provided that we have access to an oracle solving the closest vector problem in a fixed lattice. The exponential pre-processing time and the oracle call make these two algorithms unusable in practice. The main scheme whose security might be impacted by SVP in ideal lattices is the GGH13 multilinear map. This protocol is mainly used today to construct program obfuscators, which should render the code of a program unintelligible while preserving its functionality. In the second part of this thesis, we focus on the GGH13 map and its application to obfuscation. We first study the impact of statistical attacks on the GGH13 map and its variants. We then study the security of obfuscators based on the GGH13 map and propose a quantum attack against several such obfuscators. This quantum attack uses as a subroutine an algorithm that finds a short vector in an ideal lattice related to a secret element of the GGH13 map.
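The shortest vector problem above can be made concrete with a toy sketch (not any of the thesis's algorithms, and the basis below is an arbitrary invented example): in tiny dimension, a shortest non-zero lattice vector can be found by brute-force enumeration, which is exactly what becomes infeasible as the dimension grows.

```python
from itertools import product

# Hypothetical 2-dimensional integer lattice; rows are basis vectors.
B = [(201, 37), (1648, 297)]

def shortest_vector(basis, bound=50):
    """Enumerate combinations x*b1 + y*b2 with |x|, |y| <= bound."""
    best, best_norm2 = None, None
    for x, y in product(range(-bound, bound + 1), repeat=2):
        if x == 0 and y == 0:
            continue
        v = (x * basis[0][0] + y * basis[1][0],
             x * basis[0][1] + y * basis[1][1])
        n2 = v[0] ** 2 + v[1] ** 2
        if best_norm2 is None or n2 < best_norm2:
            best, best_norm2 = v, n2
    return best, best_norm2

v, n2 = shortest_vector(B)
print(v, n2)   # a shortest vector of this toy lattice has squared norm 1025
```

The enumeration cost grows exponentially with the lattice dimension, which is why SVP underpins the security of the schemes discussed above.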
Rialland, Annie. "Systèmes prosodiques africains : ou fondements empiriques pour un modèle multilinéaire." Nice, 1988. http://www.theses.fr/1988NICE2019.
Castaing, Joséphine. "Méthodes PARAFAC pour la séparation de signaux." Cergy-Pontoise, 2006. http://biblioweb.u-cergy.fr/theses/06CERG0324.pdf.
In different applications, the observed signals can be stacked in a third-order tensor that can be decomposed into a sum of rank-one tensors. Such a decomposition is called PARAFAC. Our work presents a new method to estimate the parameters of the decomposition, based on a simultaneous diagonalization, and yields a new bound on the number of these parameters. We apply this method to CDMA signals, which have the PARAFAC structure. Moreover, we propose to combine the PARAFAC structure with a constant-modulus constraint on the sources. We also show that it is possible to exploit the algebraic structure of the data to perform Independent Component Analysis in the underdetermined case. Finally, we study the rank of a random tensor, called the generic rank, and we propose a technique to compute this rank in some particular cases.
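For reference, the PARAFAC model itself can be sketched as follows (a generic construction with invented dimensions, not the thesis's estimation method): a third-order tensor of rank R is a sum of R rank-one terms, T[i,j,k] = Σ_r A[i,r]·B[j,r]·C[k,r].

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 3, 2   # hypothetical sizes and rank
A = rng.normal(size=(I, R))
B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R))

# Build the tensor from its factor matrices (sum of R rank-one tensors).
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# A rank-R CP tensor has mode-n unfoldings of rank at most R.
print(np.linalg.matrix_rank(T.reshape(I, J * K)))   # 2
```

The factor matrices A, B, C are exactly the "parameters of the decomposition" that estimation methods such as the one summarized above recover from the observed tensor.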
González-Mazón, Pablo. "Méthodes effectives pour les transformations birationnelles multilinéaires et contributions à l'analyse polynomiale de données." Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4138.
This thesis explores two distinct subjects at the intersection of commutative algebra, algebraic geometry, multilinear algebra, and computer-aided geometric design: 1. the study and effective construction of multilinear birational maps; 2. the extraction of information from measures and data using polynomials. The primary and most extensive part of this work is devoted to multilinear birational maps. A multilinear birational map is a rational map φ : (P^1)^n ⇢ P^n, defined by multilinear polynomials, which admits an inverse rational map. Birational transformations between projective spaces have been a central theme in algebraic geometry, tracing back to the seminal works of Cremona, and have witnessed significant advances in the last decades. Additionally, there has been a recent surge of interest in tensor-product birational maps, driven by the study of multiprojective spaces in commutative algebra and their practical application in computer-aided geometric design. In the first part, we address algebraic and geometric aspects of multilinear birational maps. We primarily focus on trilinear birational maps φ : (P^1)^3 ⇢ P^3, which we classify according to the algebraic structure of their space, their base loci, and the minimal graded free resolutions of the ideal generated by the defining polynomials. Furthermore, we develop the first methods for constructing and manipulating nonlinear birational maps in 3D with sufficient flexibility for geometric modeling and design. Interestingly, we discover a characterization of birationality based on tensor rank, which yields effective constructions and opens the door to the application of tools from tensor theory to birationality.
We also extend our results to multilinear birational maps in arbitrary dimension, in the case where there is a multilinear inverse. In the second part, our focus shifts to the application of polynomials to the analysis of data and measures. We tackle two distinct problems. First, we derive bounds for the size of (1-ε)-nets for superlevel sets of real polynomials. Our results allow us to extend the classical centerpoint theorem to polynomial inequalities of higher degree. Second, we address the classification of real cylinders through five-point configurations in which four points are cocyclic, i.e. they lie on a common circle. This is an instance of the more general problems of real root classification for systems of real polynomials and the extraction of algebraic primitives from raw data.
Miron, Sebastian. "Méthodes multilinéaires et hypercomplexes en traitement d'antenne multicomposante haute résolution." Phd thesis, Grenoble INPG, 2005. http://www.theses.fr/2005INPG0102.
This research is devoted to vector-sensor array processing methods. The signals recorded on a vector-sensor array allow the estimation of the direction of arrival and polarization of multiple waves impinging on the antenna. We show how the correct use of polarization information improves the performance of algorithms. The novelty of the presented work consists in the use of mathematical models well adapted to the intrinsic nature of vectorial signals. The first approach is based on a multilinear model of polarization that preserves the intrinsic structure of multicomponent acquisition. In this case, the data covariance model is represented by a cross-spectral tensor. We propose two algorithms (Vector-MUSIC and Higher-Order MUSIC) based on orthogonal decompositions of the cross-spectral tensor. We show in simulations that the use of this model and of multilinear orthogonal decompositions improves the performance of the proposed methods compared to classical techniques based on linear algebra. A second approach uses hypercomplex algebras. Quaternion and biquaternion vectors are used to model the polarized signals recorded on two-, three- or four-component sensor arrays. Quaternion-MUSIC and Biquaternion-MUSIC algorithms, based on the diagonalization of quaternion and biquaternion matrices, are introduced. We show that the use of hypercomplex numbers reduces the computational burden and increases the resolution power of the methods.
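The hypercomplex tool itself is easy to illustrate (a minimal sketch of quaternion arithmetic, not the MUSIC algorithms of the thesis): a quaternion a + bi + cj + dk can carry up to four signal components in a single sample, and its multiplication is associative but not commutative.

```python
# Quaternion product, represented as 4-tuples (a, b, c, d).
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))   # i*j = k  -> (0, 0, 0, 1)
print(qmul(j, i))   # j*i = -k -> (0, 0, 0, -1)
```

The non-commutativity shown here is what makes diagonalizing quaternion matrices, as in Quaternion-MUSIC, a non-trivial generalization of the complex case.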
Heraud, Nicolas. "Validation de données et observabilité des systèmes multilinéaires." Vandoeuvre-les-Nancy, INPL, 1991. http://www.theses.fr/1991INPL082N.
The aim of this study is to investigate data validation and observability of multilinear systems in order to diagnose the instrumentation of a process. Data validation and observability in linear systems are first reviewed, and these notions are extended to multilinear systems. Different methods, such as hierarchical computation, constraint linearization and penalization functions, are presented to estimate true values when some values are missing. After comparing the different methods, a recursive computation of the estimates using constraint linearization and penalization functions is developed. An observable system is required in order to perform data validation; we therefore developed an original method based on arborescent diagrams. The data validation technique has been successfully applied to a complex uranium processing plant owned by the French company Total Compagnie Minière France. On this partially instrumented process, measurements of volumetric flow, density and uranium content in both the solid and liquid phases are available. The analysis first allows coherent data to be obtained. Furthermore, it can be used to detect sensor faults.
Letexier, Damien. "Filtrages tensoriels adaptatifs pour la restauration d'images multidimensionnelles." Aix-Marseille 3, 2009. http://www.theses.fr/2009AIX30019.
This thesis is devoted to multidimensional signal processing. The main interest of the proposed methods lies in the tensor modeling of data sets, so that all parameters are considered jointly while processing tensors. Multilinear algebra tools are required to design the presented multidimensional filters: higher-order singular value truncation, lower-rank tensor approximation, and multidimensional Wiener filtering. However, these filters use an orthogonal flattening step that may not be adapted to the data. A new method is proposed to avoid this shortcoming. This is useful for image applications such as color or hyperspectral images. It is shown that the signal-to-noise ratio can be improved if the flattening directions are chosen properly. This manuscript also proposes a new method, including higher-order statistics, to remove Gaussian components from multidimensional data. Some examples are given for color and hyperspectral images.
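The flattening (unfolding) operation whose direction the thesis proposes to adapt can be sketched as follows, using the common moveaxis-and-reshape convention on an invented 2 x 3 x 4 array (not the thesis's specific rearrangement):

```python
import numpy as np

T = np.arange(24).reshape(2, 3, 4)   # a small hypothetical "image cube"

def unfold(tensor, mode):
    """Mode-n unfolding: the mode-n fibers become the rows' entries."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

print(unfold(T, 0).shape)   # (2, 12)
print(unfold(T, 1).shape)   # (3, 8)
print(unfold(T, 2).shape)   # (4, 6)
```

Each mode gives a different matrix view of the same data, which is why the choice of flattening direction can change the signal-to-noise ratio of a subsequent matrix-based filter.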
Karfoul, Ahmad. "Canonical decomposition of hermitian arrays : application to ICA and blind underdetermined mixture identification." Rennes 1, 2009. http://www.theses.fr/2009REN1S215.
The goal of this thesis is to propose new methods for the CANonical Decomposition (CAND) of Higher-Order (HO) Hermitian arrays. The main motivation is to solve the Independent Component Analysis (ICA) and blind under-determined mixture identification problems. First, we propose a new family of methods to jointly decompose several HO Hermitian arrays. This family involves two different approaches, semi-algebraic and iterative. Regarding the iterative one, we propose a new ALS (Alternating Least Squares)-like method that, contrary to the classical ALS method, fully exploits the symmetry of the HO array it processes. Second, we evaluate the impact of exploiting the symmetry of the processed HO array in terms of convergence speed and numerical complexity, giving rise to a new iterative algorithm. Third, we propose an efficient way to relate iterative approaches to semi-algebraic ones so that a better solution is guaranteed. Finally, the numerical complexity of different ICA methods is studied.
Renard, Nadine. "Traitement du signal tensoriel. Application à l'imagerie hyperspectrale." Aix-Marseille 3, 2008. http://www.theses.fr/2008AIX30062.
This thesis focuses on developing new algebraic methods for hyperspectral applications. The proposed methods are original in that they are based on a new data representation using third-order tensors. This representation involves the use of multilinear algebra tools; the proposed methods are referred to as multiway or multimodal methods. Tucker tensor decomposition-based methods jointly analyze the spatial and spectral modes using an alternating least squares algorithm. This thesis focuses on two problems specific to hyperspectral images. The first one concerns noise reduction. The considered additive noise is due to the acquisition system and degrades target detection efficiency. A noise-robust detection technique is proposed by incorporating a multimodal Wiener filter. The spatial and spectral n-mode filters are estimated by minimizing the mean squared error between the desired and estimated tensors. The second problem is spectral dimension reduction. The curse of dimensionality degrades the statistical estimation involved in the classification process. For this issue, the proposed multimodal reduction method reduces the spectral mode by a linear transformation jointly with a lower spatial rank approximation. This method extends the traditional dimension reduction methods. These two multimodal methods are assessed with respect to their impact on detection and classification efficiency, respectively. The results highlight the interest of joint spatial/spectral analysis in comparison with the traditional purely spectral analysis and with hybrid methods that process the spectral and spatial modes sequentially.
Boizard, Maxime. "Développement et études de performances de nouveaux détecteurs/filtres rang faible dans des configurations RADAR multidimensionnelles." Electronic Thesis or Diss., Cachan, Ecole normale supérieure, 2013. http://www.theses.fr/2013DENS0063.
Most statistical signal processing algorithms are based on the use of the signal covariance matrix (CM). In practical cases this matrix is unknown and is estimated from samples. The adaptive versions of the algorithms can then be applied, replacing the actual covariance matrix by its estimate. These algorithms present a major drawback: they require a large number of samples in order to obtain good results. If the covariance matrix is low-rank structured, its eigenbasis may be separated into two orthogonal subspaces. Thanks to the low-rank (LR) approximation, orthogonal projectors onto these subspaces may be used instead of the noise CM in processing, leading to low-rank algorithms. The adaptive versions of these algorithms achieve performance similar to that of the classical ones with fewer samples. Furthermore, the current increase in the size of the data strengthens the relevance of this type of method. However, this increase is often associated with an increase in the dimension of the system, leading to multidimensional samples. Such multidimensional data may be processed by two approaches: the vectorial one and the tensorial one. The vectorial approach consists in unfolding the data into vectors and applying the traditional algorithms. These operations are not lossless, since they involve a loss of structure. Several issues may arise from this loss: a decrease in performance and/or a lack of robustness. The tensorial approach relies on multilinear algebra, which provides a good framework to exploit these data and preserve their structural information. In this context, data are represented as multidimensional arrays called tensors. Nevertheless, generalizing vector-based algorithms to the multilinear algebra framework is not a trivial task. In particular, the extension of low-rank algorithms to the tensor context implies choosing a tensor decomposition in order to estimate the signal and noise subspaces. The purpose of this thesis is to derive and study tensor low-rank algorithms.
This work is divided into three parts. The first part deals with the derivation of the theoretical performance of a tensor MUSIC algorithm based on the Higher Order Singular Value Decomposition (HOSVD) and its application to a polarized source model. The second part concerns the derivation of tensor low-rank filters and detectors in a general low-rank tensor context. This work is based on a new definition of tensor rank and a new orthogonal tensor decomposition: the Alternative Unfolding HOSVD (AU-HOSVD). In the last part, these algorithms are applied to a particular radar configuration: Space-Time Adaptive Processing (STAP). This application illustrates the interest of the tensor approach and of algorithms based on the AU-HOSVD.
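The low-rank idea described above can be sketched in the plain matrix case (a hedged toy example with invented dimensions, not the thesis's tensor algorithms): when the interference covariance has rank r, the projector onto the orthogonal (noise) subspace can stand in for the full covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 8, 2
U = np.linalg.qr(rng.normal(size=(n, r)))[0]   # interference subspace basis
R_cov = 10.0 * U @ U.T + np.eye(n)             # rank-r clutter + white noise

w, V = np.linalg.eigh(R_cov)                   # eigenvalues in ascending order
noise_basis = V[:, :n - r]                     # n-r smallest eigenvalues
P_noise = noise_basis @ noise_basis.T          # orthogonal projector

# The projector annihilates the interference subspace.
print(np.linalg.norm(P_noise @ U) < 1e-8)      # True
```

Estimating a projector requires fewer samples than estimating a full covariance matrix, which is the sample-support advantage the abstract refers to.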
Cailly, Alexis. "Traitement du signal multidimensionnel pour les images hyperspectrales en présence d'objets de faibles dimensions spatiales." Ecole centrale de Marseille, 2012. http://www.theses.fr/2012ECDM0008.
Boizard, Maxime. "Développement et études de performances de nouveaux détecteurs/filtres rang faible dans des configurations RADAR multidimensionnelles." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00996967.
Marot, Julien. "Méthodes par sous-espaces et d'optimisation : application au traitement d'antenne, à l'analyse d'images, et au traitement de données tensorielles." Aix-Marseille 3, 2007. http://www.theses.fr/2007AIX30051.
This thesis is devoted to subspace-based and optimization methods, developed for array processing, image analysis, and tensor signal processing. Definitions concerning an array processing problem and high-resolution methods are presented. We propose an optimization method applied to source detection in the presence of phase distortions for a high number of sensors. We propose fast subspace-based methods for the estimation of straight-line offset and orientation, and several optimization methods to estimate distorted contours, nearly straight or circular. We provide a state of the art of multiway signal processing: truncation of the HOSVD, lower-rank tensor approximation, and multiway Wiener filtering. We propose a procedure for nonorthogonal tensor flattening, using the method presented in the first part.
Bernicot, Frederic. "Contribution à l'étude des opérateurs multilinéaires et des espaces de Hardy." Phd thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00199735.
Bernicot, Frédéric. "Contribution à l'étude des opérateurs multilinéaires et des espaces de Hardy." Paris 11, 2007. http://www.theses.fr/2007PA112197.
This thesis contains two independent parts. In the first one, we are interested in the study of bilinear operators. We dedicate the first two chapters to describing "time-frequency" arguments aimed at obtaining local estimates for these operators. Using these "off-diagonal" estimates, we mainly obtain continuity results for these bilinear operators on Lebesgue and Sobolev spaces. At the end of the second chapter, we study a bilinear pseudo-differential calculus. The third chapter is devoted to a geometrical study of these bilinear operators. To complete this work, in the fourth chapter we study various further results; for example, we try to generalize our results to multi-dimensional variables. The second part concerns the concept of Hardy spaces. We define an abstract construction of new Hardy spaces. Then, comparing with the already known and studied Hardy spaces, we try to clarify the minimal conditions required to keep the main properties of these spaces. We thereby also obtain a criterion for proving the H^1-L^1 continuity of some operators. We then take an interest in the study of intermediate spaces, obtained by interpolation between these new H^1 spaces and Lebesgue spaces. Finally, we use our abstract theory to solve the problem of L^p maximal regularity for evolution differential equations.
Boudehane, Abdelhak. "Structured-joint factor estimation for high-order and large-scale tensors." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG085.
Multidimensional data sets and signals occupy an important place in recent application fields. Tensor decomposition represents a powerful mathematical tool for modeling multidimensional data and signals without losing the inter-dimensional relations. The Canonical Polyadic (CP) model, a widely used tensor decomposition model, is unique up to scale and permutation indeterminacies. This property facilitates physical interpretation, which has led to the integration of the CP model in various contexts. The main challenge facing tensor modeling is the computational complexity and the memory requirements. High-order tensors represent an important issue, since the computational complexity and the required memory space increase exponentially with the order. Another issue is the size of the tensor in large-scale problems, which adds a further burden in complexity and memory. Tensor Network (TN) theory is a promising framework that allows high-order problems to be reduced to a set of lower-order problems. In particular, the Tensor-Train (TT) model, one of the TN models, is an interesting ground for dimensionality reduction. However, representing a CP tensor using a TT model is extremely expensive in the case of large-scale tensors, since it requires a full matricization of the tensor, which may exceed the memory capacity. In this thesis, we study dimensionality reduction in the context of sparse coding and high-order coupled tensor decomposition. Based on the results of the Joint dImensionality Reduction And Factor rEtrieval (JIRAFE) scheme, we use the flexibility of the TT model to integrate physics-driven constraints and prior knowledge on the factors, with the aim of reducing the computation time. For large-scale problems, we propose a scheme that allows the different steps, i.e., the dimensionality reduction and the factor estimation, to be parallelized and randomized.
We also propose a grid-based strategy allowing fully parallel processing for very large-scale and dynamic tensor decomposition.
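The TT format named above can be sketched with a plain TT-SVD (a standard textbook construction, not the JIRAFE-based scheme of the thesis): successive reshape-and-SVD steps split a d-way tensor into d third-order cores, and contracting the cores recovers the tensor.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a d-way array into a list of third-order TT-cores."""
    shape, cores, r = T.shape, [], 1
    M = T.reshape(r * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps)))          # truncate tiny modes
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * shape[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 4, 5))
cores = tt_svd(T)

# Contract the train back together and check the reconstruction.
rec = cores[0]
for G in cores[1:]:
    rec = np.tensordot(rec, G, axes=1)
rec = rec.reshape(3, 4, 5)
print(np.allclose(rec, T))   # True
```

For large tensors the gain comes from truncating the ranks, so that d small cores replace one exponentially large array.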
Gloaguen, Arnaud. "A statistical and computational framework for multiblock and multiway data analysis." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG016.
A challenging problem in multivariate statistics is to study relationships between several sets of variables measured on the same set of individuals. In the literature, this paradigm appears under several names, such as "learning from multimodal data", "data integration", "data fusion" or "multiblock data analysis". Typical examples are found in a large variety of fields such as biology, chemistry, sensory analysis, marketing and food research, where the common general objective is to identify the variables of each block that are active in the relationships with the other blocks. Moreover, each block can be composed of a high number of measurements (~1M), which involves the computation of billions of associations. A successful investigation of such a dataset requires developing a computational and statistical framework that fits both the peculiar structure of the data and its heterogeneous nature. The development of multivariate statistical methods constitutes the core of this work. All these developments find their foundations in Regularized Generalized Canonical Correlation Analysis (RGCCA), a flexible framework for multiblock data analysis that captures many well-known multiblock methods in a single optimization problem. The RGCCA algorithm consists in a single, yet very simple, update repeated until convergence. If this update satisfies certain conditions, the global convergence of the procedure is guaranteed. Throughout this work, the optimization framework of RGCCA has been extended in several directions: (i) From sequential to global: we extend RGCCA from a sequential procedure to a global one by extracting all the block components simultaneously within a single optimization problem. (ii) From matrices to higher-order tensors: Multiway Generalized Canonical Correlation Analysis (MGCCA) has been proposed as an extension of RGCCA to higher-order tensors. Sequential and global strategies have been designed for extracting several components per block.
The different variants of the MGCCA algorithm are globally convergent under mild conditions. (iii) From sparsity to structured sparsity: the core of the Sparse Generalized Canonical Correlation Analysis (SGCCA) algorithm has been improved, providing a much faster, globally convergent algorithm. SGCCA has also been extended to handle structured sparse penalties. In the second part, the versatility and usefulness of the proposed methods are investigated on various studies: (i) two imaging-genetics studies, (ii) two electroencephalography studies and (iii) one Raman microscopy study. For these analyses, the focus is on the interpretation of the results, eased by explicitly considering the multiblock, tensor and sparse structures.
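A hedged toy sketch of the kind of criterion this framework generalizes (two blocks only, a PLS-style covariance criterion solved by alternating updates; the data, dimensions and latent structure are all invented, and this is not the RGCCA package):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
z = rng.normal(size=n)                      # shared latent variable
X1 = np.outer(z, [1.0, -1.0, 0.5]) + 0.1 * rng.normal(size=(n, 3))
X2 = np.outer(z, [2.0, 1.0]) + 0.1 * rng.normal(size=(n, 2))

# Cross-covariance between the centered blocks.
C = (X1 - X1.mean(0)).T @ (X2 - X2.mean(0))

a1 = np.ones(3)
for _ in range(50):                         # alternating unit-norm updates
    a2 = C.T @ a1
    a2 /= np.linalg.norm(a2)
    a1 = C @ a2
    a1 /= np.linalg.norm(a1)

corr = np.corrcoef(X1 @ a1, X2 @ a2)[0, 1]
print(abs(corr) > 0.9)                      # True: both components track z
```

The alternating update converging to a fixed point is the two-block special case of the "single, simple update repeated until convergence" described above.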
Girka, Fabien. "Development of new statistical/ML methods for identifying multimodal factors related to the evolution of Multiple Sclerosis." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG075.
Studying a given phenomenon under multiple views can reveal a more significant part of the mechanisms at stake than considering each view separately. To design a study under such a paradigm, measurements are usually acquired through different modalities, resulting in multimodal/multiblock/multi-source data. One statistical framework explicitly suited to the joint analysis of such multi-source data is Regularized Generalized Canonical Correlation Analysis (RGCCA). RGCCA extracts canonical vectors and components that summarize the different views and their interactions. The contributions of this thesis are fourfold: (i) improve and enrich the RGCCA R package to democratize its use; (ii) extend the RGCCA framework to better handle tensor data by imposing a low-rank tensor factorization on the extracted canonical vectors; (iii) propose and investigate simultaneous versions of RGCCA to obtain all canonical components at once, paving the way for new extensions of RGCCA; (iv) use the developed tools and expertise to analyze multiple sclerosis and leukodystrophy data, with a focus on identifying biomarkers that differentiate between patients and healthy controls or between groups of patients.
Cohen, Jérémy E. "Fouille de données tensorielles environnementales." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT054/document.
Among commonly used data mining techniques, few are able to take advantage of the multiway structure of data in the form of a multiway array. In contrast, tensor decomposition techniques specifically look for the intricate processes underlying the data, where each of these processes can be used to describe all the ways of the data array. The work reported in the following pages aims at incorporating various kinds of external knowledge into the tensor canonical polyadic decomposition, which is usually understood as a blind model. The first two chapters of this manuscript introduce tensor decomposition techniques from a mathematical and an application point of view, respectively. In the third chapter, the many faces of constrained decompositions are explored, including a unifying framework for constrained decomposition, some decomposition algorithms, compression, and dictionary-based tensor decomposition. The fourth chapter discusses the modeling of subject variability when multiple data arrays are available, stemming from one or multiple subjects sharing similarities. State-of-the-art techniques are studied and expressed as particular cases of a more general flexible coupling model introduced later. The chapter ends with a discussion on dimensionality reduction when subject variability is involved, as well as some open problems.
Degbey, Octavien. "Optimisation statique hiérarchisée des systèmes de grandes dimensions : Application à l'équilibrage de bilans de mesures." Nancy 1, 1987. http://www.theses.fr/1987NAN10158.
Selmi, Mouna. "Reconnaissance d’activités humaines à partir de séquences vidéo." Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0029/document.
Human activity recognition (HAR) from video sequences is one of the major active research areas of computer vision. There are numerous applications of HAR systems, including video surveillance, search and automatic indexing of videos, and assistance for the frail elderly. This task remains a challenge because of the huge variations in the way activities are performed, in the appearance of the person, and in the acquisition conditions. The main objective of this thesis is to develop an efficient HAR method that is robust to these different sources of variability. Approaches based on interest points have shown excellent state-of-the-art performance over the past years. They are generally coupled with global classification methods, as these primitives are temporally and spatially disordered. More recent studies have achieved high performance by modeling the spatial and temporal context of interest points, for instance by encoding the neighborhood of the interest points over several scales. In this thesis, we propose an activity recognition method based on a hybrid Support Vector Machine - Hidden Conditional Random Field (SVM-HCRF) model that captures the sequential aspect of activities while exploiting the robustness of interest points in real conditions. We first extract the interest points and show their robustness with respect to the person's identity by a multilinear tensor analysis. These primitives are then represented as a sequence of local "bags of words" (BOW): the video is temporally fragmented using the sliding window technique, and each of the segments thus obtained is represented by the BOW of the interest points belonging to it. The first layer of our hybrid sequential classification system is a Support Vector Machine that converts each local BOW extracted from the video sequence into a vector of activity class probabilities. The sequence of probability vectors thus obtained is used as input to the HCRF.
The latter permits a discriminative classification of time series while modeling their internal structure via hidden states. We have evaluated our approach on various human activity datasets, and the results achieved are competitive with the current state of the art. We have demonstrated that the use of a low-level classifier (SVM) improves the performance of the recognition system, since the sequential classifier HCRF directly exploits the semantic information from the local BOWs, namely the probability of each activity for the current local segment, rather than mere raw information from interest points. Furthermore, the probability vectors are low-dimensional, which significantly reduces the risk of overfitting that can occur when the feature vector dimension is high relative to the training data size; this is precisely the case when using BOWs, which generally have a very high dimension. Estimating the HCRF parameters in a low dimension also significantly reduces the duration of the HCRF training phase.
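The local bag-of-words step described above can be sketched as follows (toy data: real systems quantize interest-point descriptors against a learned codebook, and the codebook size of 4, the frames, and the window parameters here are all invented):

```python
from collections import Counter

# Each frame yields quantized interest-point labels from a hypothetical
# 4-word codebook.
frames = [[0, 1], [1, 1], [2], [0, 3], [3, 3], [1], [2, 2], [0]]
win, step = 4, 2   # sliding window length and stride, in frames

def local_bows(frames, win, step, vocab=4):
    """One word-count histogram (local BOW) per window position."""
    bows = []
    for start in range(0, len(frames) - win + 1, step):
        counts = Counter(w for fr in frames[start:start + win] for w in fr)
        bows.append([counts[v] for v in range(vocab)])
    return bows

print(local_bows(frames, win, step))
# [[2, 3, 1, 1], [1, 1, 1, 3], [1, 1, 2, 2]]
```

Each histogram would then be mapped by the SVM layer to a vector of class probabilities, and the resulting sequence fed to the HCRF.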
Badreddine, Siwar. "Symétries et structures de rang faible des matrices et tenseurs pour des problèmes en chimie quantique." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS029.
This thesis presents novel numerical algorithms and conducts a comprehensive study of some existing numerical methods to address high-dimensional challenges arising from the resolution of the electronic Schrödinger equation in quantum chemistry. Focusing on two specific problems, our approach involves the identification and exploitation of symmetries and low-rank structures within matrices and tensors, aiming to mitigate the curse of dimensionality. The first problem considered in this thesis is the efficient numerical evaluation of the long-range component of the range-separated Coulomb potential and the long-range two-electron integrals 4th-order tensor which occurs in many quantum chemistry methods. We present two novel approximation methods. This is achieved by relying on tensorized Chebyshev interpolation, Gaussian quadrature rules combined with low-rank approximations as well as Fast Multipole Methods (FMM). This work offers a detailed explanation of these introduced approaches and algorithms, accompanied by a thorough comparison between the newly proposed methods. The second problem of interest is the exploitation of symmetries and low-rank structures to derive efficient tensor train representations of operators involved in the Density Matrix Renormalization Group (DMRG) algorithm. This algorithm, referred to as the Quantum Chemical DMRG (QC-DMRG) when applied in the field of quantum chemistry, is an accurate iterative optimization method employed to numerically solve the time-independent Schrödinger equation. This work aims to understand and interpret the results obtained from the physics and chemistry communities and seeks to offer novel theoretical insights that, to the best of our knowledge, have not received significant attention before. 
We conduct a comprehensive study and provide proofs, where necessary, of the existence of a particular block-sparse tensor train representation of the Hamiltonian operator and its associated eigenfunction. This is achieved while maintaining physical conservation laws, which manifest as group symmetries in the tensors, such as conservation of the particle number. The third part of this work is dedicated to a proof-of-concept Quantum Chemical DMRG (QC-DMRG) Julia library, designed for the quantum chemical Hamiltonian operator model. Here we exploit the block-sparse tensor train representation of both the operator and the eigenfunction. With these structures, our goal is to speed up the most time-consuming steps of QC-DMRG, including tensor contractions, matrix-vector operations, and matrix compression through truncated Singular Value Decompositions (SVD). Furthermore, we provide empirical results from various molecular simulations, comparing the performance of our library with the state-of-the-art ITensors library and showing that we attain comparable performance.
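The tensorized Chebyshev interpolation mentioned in the abstract above can be illustrated in one dimension: the long-range Coulomb kernel erf(mu*r)/r is smooth, so a short Chebyshev expansion approximates it to high accuracy. The interval, mu and number of nodes below are illustrative choices, not the thesis's actual parameters; the thesis tensorizes this idea across several dimensions.

```python
import math

def long_range_kernel(r, mu=1.0):
    """Long-range part of the range-separated Coulomb potential (smooth in r)."""
    return math.erf(mu * r) / r

a, b, n = 0.1, 5.0, 25                       # interval and number of nodes (illustrative)

# Chebyshev nodes (first kind) mapped to [a, b], and interpolation coefficients
# c_k = (2/n) * sum_j f(r_j) * cos(k * theta_j).
thetas = [math.pi * (j + 0.5) / n for j in range(n)]
nodes = [a + (b - a) * (math.cos(t) + 1) / 2 for t in thetas]
fvals = [long_range_kernel(r) for r in nodes]
coeffs = [2.0 / n * sum(fvals[j] * math.cos(k * thetas[j]) for j in range(n))
          for k in range(n)]

def cheb_eval(r):
    """Evaluate the interpolant via the recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    x = 2 * (r - a) / (b - a) - 1            # map back to [-1, 1]
    t_prev, t_curr = 1.0, x
    total = coeffs[0] / 2 + coeffs[1] * x
    for k in range(2, n):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
        total += coeffs[k] * t_curr
    return total

# The interpolant matches the smooth kernel to high accuracy on the interval.
for r in [0.3, 1.7, 2.9, 4.6]:
    assert abs(cheb_eval(r) - long_range_kernel(r)) < 1e-6
```

Because the kernel is analytic, the Chebyshev coefficients decay geometrically, which is what makes low-rank tensorized versions of this expansion effective in higher dimensions.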
Salameh, Farah. "Méthodes de modélisation statistique de la durée de vie des composants en génie électrique." Phd thesis, Toulouse, INPT, 2016. http://oatao.univ-toulouse.fr/16622/1/Salameh_Farah.pdf.
Roux-Langlois, Adeline. "Lattice - Based Cryptography - Security Foundations and Constructions." Thesis, Lyon, École normale supérieure, 2014. http://www.theses.fr/2014ENSL0940/document.
Lattice-based cryptography is a branch of cryptography exploiting the presumed hardness of some well-known problems on lattices. Its main advantages are its simplicity, efficiency, and apparent security against quantum computers. The principle of security proofs in lattice-based cryptography is to show that attacking a given scheme is at least as hard as solving a particular problem, such as the Learning with Errors problem (LWE) or the Small Integer Solution problem (SIS). Then, by showing that those two problems are at least as hard to solve as a hard lattice problem, presumed intractable in polynomial time, we conclude that the constructed scheme is secure. In this thesis, we improve the foundations of the security proofs and build new cryptographic schemes. We study the hardness of the SIS and LWE problems and of some of their variants over integer rings of cyclotomic fields and over modules on those rings. We show that there is a classical hardness proof for the LWE problem (Regev's earlier reduction was quantum), and that the module variants of SIS and LWE are also hard to solve. We also give two new lattice-based group signature schemes whose security is based on SIS and LWE. One is the first lattice-based group signature with signature size logarithmic in the number of users; the other provides an additional functionality, verifier-local revocation. Finally, we improve the size of some parameters in the 2013 work of Garg, Gentry and Halevi on cryptographic multilinear maps.
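To make the LWE problem mentioned above concrete, here is a toy instance with deliberately tiny, insecure parameters: given A and b = A·s + e (mod q) with a small error vector e, recovering the secret s is believed hard for properly chosen dimensions and moduli, while knowing s makes the noise easy to strip.

```python
import random

# Toy LWE instance (illustrative parameters, far too small to be secure).
random.seed(0)
q, n, m = 97, 4, 8                      # modulus, secret dimension, number of samples

s = [random.randrange(q) for _ in range(n)]          # secret vector
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.choice([-1, 0, 1]) for _ in range(m)]    # small error terms

# Published samples: b = A*s + e (mod q).
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

# A party knowing s can strip the noise: b - A*s mod q leaves only the small
# error e; without s, the pairs (A, b) are believed indistinguishable from uniform.
residual = [(b[i] - sum(A[i][j] * s[j] for j in range(n))) % q for i in range(m)]
assert all(r in (0, 1, q - 1) for r in residual)
```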
Lepoint, Tancrède. "Conception and implémentation de cryptographie à base de réseaux." Phd thesis, Ecole Normale Supérieure de Paris - ENS Paris, 2014. http://tel.archives-ouvertes.fr/tel-01069864.
Caland, Fabrice. "Décomposition tensorielle de signaux luminescents émis par des biosenseurs bactériens pour l'identification de Systèmes Métaux-Bactéries." Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00934507.
Nérot, Agathe. "Modélisation géométrique du corps humain (externe et interne) à partir des données externes." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1133.
Digital human models have become instrumental tools for the analysis of posture and motion in many areas of biomechanics, including ergonomics and clinical settings. These models include a geometric representation of the body surface and an internal linkage composed of rigid segments and joints allowing the simulation of human movement. Customizing a human model starts with adjusting the external anthropometric dimensions, which are then used as input data for adjusting the internal skeletal segment lengths. While the external data points are readily measurable with current 3D scanning tools, the scientific challenge is to predict the characteristic points of the internal skeleton from external data only. The Institut de Biomécanique Humaine Georges Charpak (Arts et Métiers ParisTech) has developed methods for 3D reconstruction of the bones and the external envelope from biplanar radiographs obtained with the EOS system (EOS Imaging, Paris), a low-radiation-dose technology. Using this technology, this work proposed new external-internal statistical relationships to predict points of the longitudinal skeleton, in particular the complete set of spine joint centers, from a database of 80 subjects. Implementing this work could improve the realism of current digital human models used for biomechanical analyses requiring joint center locations, such as the estimation of range of motion and joint loading.
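An external-internal statistical relationship of the kind described above can be as simple as a least-squares regression from an external measurement to an internal landmark coordinate. The sketch below uses made-up numbers, not the thesis's 80-subject EOS database, and a single predictor where the actual models are multivariate.

```python
# Fit y = a*x + b by ordinary least squares: predict an internal joint-centre
# coordinate (y) from an external landmark measure (x). All values are synthetic.
xs = [150.0, 158.0, 165.0, 171.0, 180.0, 188.0]   # external measure (e.g. stature, cm)
ys = [40.2, 42.1, 43.9, 45.3, 47.6, 49.5]         # internal target coordinate (cm)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

pred = a * 170.0 + b    # predicted internal coordinate for a new subject

assert all(abs(a * x + b - y) < 0.5 for x, y in zip(xs, ys))  # good linear fit
assert 44.0 < pred < 46.0
```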
Luciani, Xavier. "Analyse numérique des spectres de fluorescence 3D issus de mélanges non linéaires." Phd thesis, Université du Sud Toulon Var, 2007. http://tel.archives-ouvertes.fr/tel-00287255.
Sellami, Akrem. "Interprétation sémantique d'images hyperspectrales basée sur la réduction adaptative de dimensionnalité." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0037/document.
Hyperspectral imagery captures rich spectral information about a scene in several hundred or even thousands of narrow, contiguous spectral bands. However, given the high number of spectral bands, the strong inter-band spectral correlation and the redundancy of spectro-spatial information, interpreting these massive hyperspectral data is one of the major challenges for the remote sensing community. In this context, the main challenge is to reduce the number of unnecessary spectral bands, that is, to reduce their redundancy and high correlation while preserving the relevant information. Projection approaches transform the hyperspectral data into a reduced subspace by combining all original spectral bands, while band-selection approaches seek a subset of relevant spectral bands. In this thesis, we first focus on hyperspectral image classification, integrating spectro-spatial information into dimensionality reduction in order to improve classification performance and overcome the loss of spatial information inherent in projection approaches. We therefore propose a hybrid model that preserves spectro-spatial information by exploiting the tensor model in the locality preserving projection approach (TLPP), and uses constraint band selection (CBS) as an unsupervised approach to select discriminant spectral bands. To model the uncertainty and imperfection of these reduction approaches and classifiers, we propose an evidential approach based on Dempster-Shafer Theory (DST). In a second step, we extend the hybrid model by exploiting semantic knowledge, extracted from the features obtained with the previously proposed TLPP approach, to enrich the CBS technique. The proposed approach thus selects relevant spectral bands that are simultaneously informative, discriminant, distinctive and only weakly redundant.
Concretely, this approach selects the discriminant and distinctive spectral bands using the CBS technique, injecting rules obtained with knowledge extraction techniques to automatically and adaptively select the optimal subset of relevant spectral bands. The performance of our approach is evaluated on several real hyperspectral datasets.
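The goal stated above, keeping bands that are informative but not redundant, can be caricatured in a few lines. The greedy variance-and-correlation heuristic below is a hypothetical sketch of the idea, not the actual CBS algorithm or its rule-injection mechanism.

```python
# Greedy redundancy-aware band selection (illustrative heuristic): keep bands
# with high variance whose correlation with already-selected bands is below a
# threshold, so near-duplicate bands are dropped.
def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def correlation(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / ((variance(u) * variance(v)) ** 0.5 * len(u) + 1e-12)

def select_bands(bands, max_corr=0.9):
    order = sorted(range(len(bands)), key=lambda i: -variance(bands[i]))
    kept = []
    for i in order:
        if all(abs(correlation(bands[i], bands[j])) < max_corr for j in kept):
            kept.append(i)
    return sorted(kept)

# Four toy "spectral bands" over five pixels; bands 0 and 1 are near-duplicates.
bands = [
    [1.0, 2.0, 3.0, 4.0, 5.0],
    [1.1, 2.1, 3.0, 4.1, 5.1],   # redundant with band 0
    [5.0, 3.0, 4.0, 1.0, 2.0],
    [0.5, 0.6, 0.5, 0.6, 0.5],   # low variance, kept only if uncorrelated
]
assert select_bands(bands) in ([0, 2, 3], [1, 2, 3])  # one duplicate is dropped
```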
Zniyed, Yassine. "Breaking the curse of dimensionality based on tensor train : models and algorithms." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS330.
Massive and heterogeneous data processing and analysis have been clearly identified by the scientific community as key problems in several application areas, popularized under the generic terms of "data science" or "big data". Processing large volumes of data, extracting their hidden patterns, and performing prediction and inference tasks has become crucial in economy, industry and science. Treating each set of measured data independently is clearly a reductive approach: doing so may entirely miss "hidden relationships" or inter-correlations between the datasets. Tensor decompositions have recently received particular attention due to their ability to handle a variety of mining tasks applied to massive datasets, providing a pertinent framework that takes the heterogeneity and multi-modality of the data into account. In this setting, data can be arranged as a D-dimensional array, also referred to as a D-order tensor. The purpose of this work is to ensure the following properties: (i) stable factorization algorithms (not suffering from convergence problems), (ii) a low storage cost (i.e., the number of free parameters must be linear in D), and (iii) a graph-based formalism allowing a simple but rigorous mental visualization of decompositions of high-order tensors, i.e., for D > 3. We therefore rely on the tensor train (TT) decomposition to develop new TT factorization algorithms and new equivalences in terms of tensor modeling, leading to a new dimensionality-reduction strategy and a coupled least-squares criterion optimization for parameter estimation, named JIRAFE. This methodological work has found applications in multidimensional spectral analysis and relay telecommunication systems.
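Property (ii) above, storage linear in D, is the defining feature of the TT format: a D-order tensor T(i1,…,iD) is stored as D small cores G_d of shape (r_{d-1}, n, r_d) with r_0 = r_D = 1, and an entry is the product of D small matrices. The sketch below uses illustrative sizes and random cores to count parameters and evaluate one entry.

```python
import random

# Tensor-train (TT) format sketch: D cores, each of shape (r_{d-1}, n, r_d).
random.seed(1)
D, n, r = 6, 4, 3                 # order, mode size, TT-rank (illustrative)
ranks = [1] + [r] * (D - 1) + [1]

cores = [
    [[[random.uniform(-1, 1) for _ in range(ranks[d + 1])]
      for _ in range(n)] for _ in range(ranks[d])]
    for d in range(D)
]

def tt_entry(indices):
    """Evaluate T(i1,...,iD) by multiplying the D core slices left to right."""
    row = cores[0][0][indices[0]]                       # 1 x r row vector
    for d in range(1, D):
        slice_d = [cores[d][a][indices[d]] for a in range(ranks[d])]
        row = [sum(row[a] * slice_d[a][b] for a in range(ranks[d]))
               for b in range(ranks[d + 1])]
    return row[0]

# Storage: sum of core sizes, linear in D, versus n**D for the full tensor.
tt_params = sum(ranks[d] * n * ranks[d + 1] for d in range(D))
full_params = n ** D
assert tt_params == 168           # 12 + 4*36 + 12
assert tt_params < full_params    # 168 << 4096
assert isinstance(tt_entry([0, 1, 2, 3, 0, 1]), float)
```

With n = 4 and D = 6 the gap is already 168 parameters versus 4096; for larger D the full tensor is simply unrepresentable while the TT storage keeps growing only linearly.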
Mbuntcha, Wuntcha Calvin. "Optimisation quadratique en variables binaires : quelques résultats et techniques." Thèse, 2009. http://hdl.handle.net/1866/6625.