Theses on the topic "Noyaux de produit scalaire"
Consult the 16 best theses for your research on the topic "Noyaux de produit scalaire".
Wacker, Jonas. "Random features for dot product kernels and beyond". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS241.
Dot product kernels, such as polynomial and exponential (softmax) kernels, are among the most widely used kernels in machine learning, as they enable modeling the interactions between input features, which is crucial in applications like computer vision, natural language processing, and recommender systems. However, a fundamental drawback of kernel-based statistical models is their limited scalability to a large number of inputs, which makes approximations necessary. In this thesis, we study techniques to linearize kernel-based methods by means of random feature approximations, focusing on polynomial kernels and more general dot product kernels to make these kernels more useful in large-scale learning. In particular, we use variance analysis as the main tool to study and improve the statistical efficiency of such sketches.
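As an illustration of the random-feature idea summarized in this abstract, the following is a minimal sketch (not the thesis's exact construction) of the classical unbiased Gaussian product estimator for the polynomial kernel k(x, y) = (x·y)^p:

```python
import numpy as np

# Minimal sketch: with i.i.d. w ~ N(0, I), E[(w.x)(w.y)] = x.y, so a product
# of p independent Gaussian projections is an unbiased estimator of (x.y)^p.

def polynomial_random_features(X, degree, n_features, rng):
    """Map rows of X to n_features random features whose pairwise inner
    products approximate the degree-p dot product kernel."""
    n, d = X.shape
    W = rng.standard_normal((degree, n_features, d))
    Z = np.ones((n, n_features))
    for i in range(degree):
        Z *= X @ W[i].T          # multiply the p independent projections
    return Z / np.sqrt(n_features)

rng = np.random.default_rng(0)
x = rng.standard_normal(5); x /= np.linalg.norm(x)
y = rng.standard_normal(5); y /= np.linalg.norm(y)
Z = polynomial_random_features(np.stack([x, y]), degree=2, n_features=20000, rng=rng)
exact = (x @ y) ** 2            # degree-2 polynomial kernel
approx = Z[0] @ Z[1]            # random-feature estimate of the same quantity
```

The estimator is unbiased but has non-trivial variance, which is exactly why the variance analysis emphasized in the thesis matters for choosing the sketch size.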
Beaucoup, Franck. "Estimations quantitatives sur les polynômes liées à la répartition des racines". Lyon 1, 1994. http://www.theses.fr/1994LYO19004.
Boukary, Baoua Oumarou. "Application du produit star de Berezin à l'étude des paires de Gelfand résolubles". Metz, 2000. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/2000/Boukary_Baoua.Oumarou.SMZ0001.pdf.
Macovschi, Stefan Eugen. "Estimations quantitatives sur les polynômes pour différentes normes hilbertiennes". Lyon 1, 1998. http://www.theses.fr/1998LYO10197.
Feiz, Amir-Ali. "Simulations des transferts turbulents dans une conduite cylindrique en rotation". Marne-la-Vallée, 2006. http://www.theses.fr/2006MARN0281.
Texto completoLe, Calvez Caroline. "Accélération de méthodes de Krylov pour la résolution de systèmes linéaires creux sur machines parallèles". Lille 1, 1998. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/1998/50376-1998-225.pdf.
Dicuangco, Lilibeth. "On duadic codes and split group codes". Nice, 2006. http://www.theses.fr/2006NICE4098.
Texto completoFlorent, Alice. "Contraindre les distributions de partons dans les noyaux grâce au boson W produit dans les collisions pPb à 5,02 TeV avec CMS". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112339/document.
Texto completoMeasurements of W bosons produced in pPb collisions at nucleon-nucleon centre-of-mass energy $\sqrt{s\rm{_{NN}}}=5.02$ TeV are presented in the muon plus neutrino decay channel. The data sample of 34.6 nb-1 integrated luminosity was collected by the CMS detector. The W boson differential cross sections, lepton-charge asymmetry and forward/backward asymmetry are computed as a function of the lepton pseudorapidity, for muons of transverse momentum higher than 25 GeV/$c$. These observables are compared to two sets of parton distributions (PDF). One of two assumes nuclear modifications (EPS09) while the other is simply a superposition of free proton PDF CT10). Some of the observables deviate from expectations based on unmodified and currently available nuclear PDF. One in particular slightly deviates from both predictions which may indicates dependence of nuclear PDF as a function of the valence quark flavor
Braghin, Fabio Luis. "I. Les ondes d'isospin dans la matière nucléaire chaude, les résonances géantes dans les noyaux. II. Quelques aspects de l'évolution temporelle des fluctuations quantiques d'un champ scalaire en auto-interaction". Paris 11, 1995. http://www.theses.fr/1995PA112515.
Biedinger, Jean-Marie. "Contribution à l'étude de la diffusion du champ électromagnétique dans le fer massif : application à l'analyse d'un moteur asynchrone à rotor massif (M.A.R.M.)". Compiègne, 1986. http://www.theses.fr/1986COMPE061.
Texto completoThe present study concerns the analysis of the electromagnetic field diffusion in solid iron, applied to electromechanical energy converters. It comprises two principal parts : the review of the general macroscopic equations and their various mathematical formulations ; the finite element formulation and its comparison to experimental results obtained on a 1 MW - 20 000 rpm Solid Rotor Induction Machine (S. R. I. M. ). The first part begins with the analysis of the tridimensional character of the electromagnetic problems in a S. R. I. M. , in order to deduce a coherent formulation based on field potentials. Then it recalls the different variational formulations of these equations in terms of various potentials : magnetic vector potential and electric scalar potential inside the conducting region ; magnetic scalar potential in the non-conducting region ; reduced magnetic scalar potential inside the current source region. This model is able to take into account the eddy currents and the magnetic saturation in the conducting moving parts, with a minimum set of degrees of freedom. Other possible simplifications are examinated, according to the experience. The second part is allowed to the numerical implementation using the finite element method. First, simplified two-dimensional models are developed. Then the use of the propagation boundary conditions, well adapted to the nature of the rotating field in a polyphased machine, and of the equivalent sinusoidal time-varying model, authorises a tridimensional computation. The application of these models to the S. R. I. M. Allows to study particular configurations of the unlaminated rotor (as axial slits, squirrel-cage, circumferential grooves), or the effects of the finite length for example
Bourse, Florian. "Functional encryption for inner-product evaluations". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE067/document.
Functional encryption is an emerging framework in which a master authority can distribute keys that allow some computation over encrypted data in a controlled manner. The trend on this topic is to build schemes that are as expressive as possible, i.e., functional encryption that supports any circuit evaluation. These results come at the cost of efficiency and security: they rely on recent, not very well studied assumptions, and no construction is close to being practical. The goal of this thesis is to attack this challenge from a different angle: we try to build the most expressive functional encryption scheme we can from standard assumptions, while keeping the constructions simple and efficient. To this end, we introduce the notion of functional encryption for inner-product evaluations, where plaintexts are vectors x, and the trusted authority delivers keys for vectors y that allow the evaluation of the inner product ⟨x, y⟩. This functionality already offers some direct applications, and it can also be used in theoretical constructions, as the inner product is a widely used operation. Finally, we present two generic frameworks to construct inner-product functional encryption schemes, as well as some concrete instantiations whose security relies on standard assumptions. We also compare their pros and cons.
Ibarrondo, Luis Alberto. "Privacy-preserving biometric recognition systems with advanced cryptographic techniques". Electronic Thesis or Diss., Sorbonne université, 2023. https://theses.hal.science/tel-04058954.
Dealing with highly sensitive data, identity management systems must provide adequate privacy protection as they leverage biometrics technology. Wielding Multi-Party Computation (MPC), Homomorphic Encryption (HE) and Functional Encryption (FE), this thesis tackles the design and implementation of practical privacy-preserving biometric systems, from feature extraction to matching against enrolled users. This work is devoted to the design of secure biometric solutions for multiple scenarios, taking special care to balance accuracy and performance with the security guarantees, while improving upon existing works in the domain. We go beyond privacy preservation against semi-honest adversaries by also ensuring correctness in the face of malicious adversaries. Lastly, we address the leakage of biometric data when revealing the output, a privacy concern often overlooked in the literature. The main contributions of this thesis are: • A new face identification solution built on FE-based private inner product matching, mitigating input leakage. • A novel efficient two-party computation protocol, Funshade, to preserve the privacy of biometric thresholded distance metric operations. • An innovative method to perform privacy-preserving biometric identification based on the notion of group testing, named Grote. • A new distributed decryption protocol with collaborative masking addressing input leakage, dubbed Colmade. • An honest-majority three-party computation protocol, Banners, to perform maliciously secure inference of Binarized Neural Networks. • An HE Python library named Pyfhel, offering high-level abstraction and low-level functionalities, with applications in teaching.
Tucker, Ida. "Chiffrement fonctionnel et signatures distribuées fondés sur des fonctions de hachage à projection, l'apport des groupes de classe". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN054.
One of the current challenges in cryptographic research is the development of advanced cryptographic primitives ensuring a high level of confidence. In this thesis, we focus on their design, while proving their security under well-studied algorithmic assumptions. My work builds on the linearity of homomorphic encryption, which allows linear operations to be performed on encrypted data. Specifically, I built upon the linearly homomorphic encryption scheme introduced by Castagnos and Laguillaumie at CT-RSA'15. Their scheme possesses the unusual property of having a prime-order plaintext space, whose size can essentially be tailored to one's needs. Aiming at a modular approach, I designed from their work technical tools (projective hash functions, zero-knowledge proofs of knowledge) which provide a rich framework lending itself to many applications. This framework first allowed me to build functional encryption schemes; this highly expressive primitive allows fine-grained access to the information contained in, e.g., an encrypted database. Then, in a different vein, but from these same tools, I designed threshold digital signatures, allowing a secret key to be shared among multiple users, who must then collaborate in order to produce valid signatures. Such signatures can be used, among other applications, to secure cryptocurrency wallets. Significant efficiency gains, namely in terms of bandwidth, result from instantiating these constructions with class groups. This work is at the forefront of the revival these mathematical objects have seen in cryptography over the last few years.
Xiong, Hao. "Modélisation multiscalaire de matériaux granulaires en application aux problèmes d'ingénierie géotechnique". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAI096/document.
Granular materials exhibit a wide spectrum of constitutive features when subjected to various loading paths. Developing constitutive models that account for these features has challenged scientists for decades. A promising direction is the multi-scale approach, in which the constitutive model is formulated by relating the material's macroscopic properties to the corresponding microstructural properties. This thesis proposes a three-dimensional micro-mechanical model (the so-called 3D-H model) that takes into account an intermediate scale (meso-scale), making it possible to describe a variety of constitutive features in a natural way. The comparison between experimental tests and numerical simulations reveals the predictive capability of this model. In particular, several simulations are carried out with different confining pressures and initial void ratios; the critical state is quantitatively described without requiring any critical-state formulation or parameter. The model is also analyzed from a microscopic viewpoint, wherein the evolution of some key microscopic parameters is investigated. Then, a 3D multi-scale approach is presented to investigate the mechanical behavior of a macroscopic specimen consisting of a granular assembly, as a boundary value problem. The core of this approach is a multiscale coupling, wherein the finite element method is used to solve a boundary value problem and the 3D-H model is employed to build the micro constitutive relationship used at the scale of a representative volume element. This approach provides a convenient way to link macroscopic observations with intrinsic microscopic mechanisms. Plane-strain biaxial loading conditions are selected to simulate the occurrence of strain localization. A series of tests is performed, wherein distinct failure patterns are observed and analyzed.
A system of shear bands naturally appears in an initially homogeneous specimen. By defining the shear band area, microstructural mechanisms are investigated separately inside and outside the shear band. Moreover, a second-order work directional analysis is performed by applying strain probes at different stress-strain states along drained biaxial loading paths. The normalized second-order work, introduced as an indicator of an unstable trend of the system, is analyzed not only at the macroscale but also at the microscale. Finally, a second-order work analysis applied to geotechnical problems using the aforementioned multiscale approach is presented. The multiscale approach is used to simulate a homogeneous and a non-homogeneous BVP, opening a path to interpreting and understanding the micro-mechanisms behind the occurrence of failure in geotechnical problems. This multiscale approach utilizes an explicit dynamic integration method so that post-peak failure can be investigated without requiring over-sophisticated mathematical ingredients. Thus, by switching the loading method from strain control to stress control at the limit state, the collapse of the system is reflected in an abrupt increase of kinetic energy, stemming from the difference between internal and external second-order works.
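The normalized second-order work used in this abstract as an instability indicator can be sketched as follows (a minimal illustration, not the thesis code; the Voigt-style vector layout and the increment values are assumed for the example):

```python
import numpy as np

def normalized_second_order_work(dsigma, deps):
    """d2W = dsigma : deps, normalized by the norms of the two increments.
    A non-positive value flags a potentially unstable stress-strain state."""
    dsigma = np.asarray(dsigma, dtype=float).ravel()
    deps = np.asarray(deps, dtype=float).ravel()
    return float(dsigma @ deps) / (np.linalg.norm(dsigma) * np.linalg.norm(deps))

# Hypothetical stress and strain increments (illustrative values only):
stable = normalized_second_order_work([1.0, 0.5, 0.2], [2e-3, 1e-3, 0.0])
unstable = normalized_second_order_work([-1.0, 0.2, 0.0], [3e-3, 0.0, 0.0])
# stable > 0 (stable trend), unstable < 0 (potential instability)
```

In a directional analysis, this quantity would be evaluated for strain probes in many directions around a given stress-strain state, as the abstract describes.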
Ben, Kahla Haithem. "Sur des méthodes préservant les structures d'une classe de matrices structurées". Thesis, Littoral, 2017. http://www.theses.fr/2017DUNK0463/document.
The classical linear algebra methods for calculating eigenvalues and eigenvectors of a matrix, or lower-rank approximations of a solution, etc., do not consider the structures of matrices. Such structures are usually destroyed in the numerical process. Alternative structure-preserving methods are therefore of considerable interest to the community, and this thesis contributes to this field. The SR decomposition is usually implemented via the symplectic Gram-Schmidt algorithm. As in the classical case, a loss of orthogonality can occur. To remedy this, we propose two algorithms, RSGSi and RMSGSi, in which the reorthogonalization of the current set of vectors against the previously computed set is performed twice. The loss of J-orthogonality is significantly reduced. A direct rounding-error analysis of the symplectic Gram-Schmidt algorithm is very hard to accomplish; we managed to get around this difficulty and give error bounds on the loss of J-orthogonality and on the factorization. Another way to implement the SR decomposition is based on symplectic Householder transformations. An optimal choice of free parameters yields an optimal version of the algorithm, SROSH. However, the latter may be subject to numerical instability. We propose a new modified version, SRMSH, which has the advantage of being numerically more stable. A detailed study leads to two new, numerically more stable variants: SRMSH and SRMSH2. In order to build an SR algorithm of complexity O(n³), where 2n is the size of the matrix, a reduction to condensed matrix form (upper J-Hessenberg form) via adequate similarities is crucial. This reduction may be handled via the JHESS algorithm. We show that it is possible to reduce a general matrix to upper J-Hessenberg form using only symplectic Householder transformations.
The new algorithm, called JHSH, is based on an adaptation of the SRSH algorithm. It leads to two new variants, JHMSH and JHMSH2, which are significantly more stable numerically; we found that these algorithms behave quite similarly to the JHESS algorithm. The main drawback of all these algorithms (JHESS, JHMSH, JHMSH2) is that they may encounter fatal breakdowns, or suffer a severe form of near-breakdown, causing a brutal stop of the computations or leading to serious numerical instability. This phenomenon has no equivalent in the Euclidean case. We sketch a very efficient strategy for curing fatal breakdowns and treating near-breakdowns; the new algorithms incorporating this modification are referred to as MJHESS, MJHSH, JHM²SH and JHM²SH2. These strategies were then incorporated into the implicit version of the SR algorithm to overcome the difficulties caused by fatal breakdowns or near-breakdowns; we recall that without these strategies, the SR algorithm breaks down. Finally, and in another framework of structured matrices, we present a robust algorithm, via FFT and a Hankel matrix, based on computing approximate greatest common divisors (GCD) of polynomials, for solving the problem of blind image deconvolution. Specifically, we design a specialized algorithm for computing the GCD of bivariate polynomials. The new algorithm is based on the fast GCD algorithm for univariate polynomials, of quadratic complexity O(n²) flops. The complexity of our algorithm is O(n² log(n)), where the size of the blurred images is n × n. Experimental results with synthetically blurred images are included to illustrate the effectiveness of our approach.
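The "orthogonalize twice" idea behind RSGSi/RMSGSi can be illustrated in the plain Euclidean setting (a hedged sketch: the thesis works with J-orthogonality and symplectic Gram-Schmidt, whereas this example uses ordinary modified Gram-Schmidt to show how a second pass reduces the loss of orthogonality):

```python
import numpy as np

def mgs(A, reorth=False):
    """Modified Gram-Schmidt; optionally orthogonalize each column twice."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    for k in range(n):
        v = A[:, k].copy()
        for _ in range(2 if reorth else 1):   # second pass = reorthogonalization
            for j in range(k):
                v -= (Q[:, j] @ v) * Q[:, j]
        Q[:, k] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(0)
# Moderately ill-conditioned test matrix so the loss of orthogonality is visible.
A = rng.standard_normal((50, 20)) @ np.diag(10.0 ** (-0.4 * np.arange(20)))
Q1 = mgs(A)
Q2 = mgs(A, reorth=True)
loss_plain = np.linalg.norm(Q1.T @ Q1 - np.eye(20))
loss_reorth = np.linalg.norm(Q2.T @ Q2 - np.eye(20))   # orders of magnitude smaller
```

The single-pass loss grows with the condition number of A, while the two-pass variant stays near machine precision, which mirrors the improvement in J-orthogonality reported in the abstract.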
Côté, Hugo. "Programmes de branchement catalytiques : algorithmes et applications". Thèse, 2018. http://hdl.handle.net/1866/22123.