Dissertations / Theses on the topic 'Estimation sous contraintes'
Consult the top 30 dissertations / theses for your research on the topic 'Estimation sous contraintes'.
Langlais, Valérie. "Estimation sous contraintes d'inégalités." Paris, ENMP, 1990. http://www.theses.fr/1990ENMP0260.
Cabral Farias, Rodrigo. "Estimation sous contraintes de communication : algorithmes et performances asymptotiques." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENT024/document.
With recent advances in sensing and communication technology, sensor networks have emerged as a new field in signal processing. One of the applications of this new field is remote estimation, where sensors gather information and send it to some distant point where estimation is carried out. To overcome the new design challenges brought by this approach (constrained energy, bandwidth and complexity), quantization of the measurements can be considered. In this context, we study the problem of estimation based on quantized measurements. We focus mainly on the scalar location parameter estimation problem; the parameter is considered to be either constant or varying according to a slow Wiener process model. We present estimation algorithms to solve this problem and, based on performance analysis, we show the importance of quantizer range adaptiveness for obtaining optimal performance. We propose a low-complexity adaptive scheme that jointly estimates the parameter and updates the quantizer thresholds, achieving in this way asymptotically optimal performance. With only 4 or 5 bits of resolution, the asymptotically optimal performance for uniform quantization is shown to be very close to the continuous-measurement estimation performance. Finally, we propose a high-resolution approach to obtain an approximation of the optimal nonuniform quantization thresholds for parameter estimation, and also to obtain an analytical approximation of the estimation performance based on quantized measurements.
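As an illustration of the joint estimate/threshold adaptation idea summarized in this abstract, here is a minimal Python sketch that recursively estimates a constant location parameter from 1-bit comparisons, re-centring the quantizer threshold on the current estimate. The Gaussian noise, gain schedule and true value are assumptions for the example, not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    theta_true = 2.5                              # unknown location parameter
    y = theta_true + rng.standard_normal(20000)   # noisy sensor samples

    theta = 0.0                                   # initial guess = initial threshold
    for k, yk in enumerate(y, start=1):
        b = 1.0 if yk > theta else -1.0           # the single bit the sensor transmits
        theta += (2.0 / k) * b                    # decreasing gain; threshold re-centred on theta
    print(theta)                                  # close to 2.5 (the sign algorithm tracks the median)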
Cabral Farias, Rodrigo. "Estimation sous contraintes de communication : algorithmes et performances asymptotiques." PhD thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00877073.
Ciuca, Marian. "Estimation non-paramétrique sous contraintes. Applications en finance stochastique." Aix-Marseille 1, 2003. http://www.theses.fr/2003AIX11018.
In financial mathematics, when using the celebrated Black-Scholes formula, one can be asked to nonparametrically estimate the volatility function in a one-sided manner: the estimator has to be always greater than, or equal to, the estimated function. In the first part, working over Besov smoothness classes, we construct wavelet linear and nonlinear estimators of the diffusion coefficient of a diffusion process, using the sup-norm as a quality criterion for an estimator, and compute their convergence rates in a minimax, and respectively adaptive, context; then we construct an asymptotically one-sided diffusion coefficient estimator and compute its minimax convergence rate. In the second part we study the one-sided estimation problem in the Gaussian white noise model, and exhibit the minimax convergence rate for this constrained nonparametric estimation problem, proving lower and upper bound results. In the third part, we prove that our volatility estimators yield asymptotically replicating, super-replicating and sub-replicating Black-Scholes strategies. The last part presents our estimators from an applied point of view, by means of numerical simulations.
Ouassou, Idir. "Estimation sous contraintes pour des lois à symétrie sphérique." Rouen, 1999. http://www.theses.fr/1999ROUES031.
Full textYounes, Hassan. "Estimation du taux de mortalité sous contraintes d'ordre pour des données censurées ou tronquées /." Montréal : Université du Québec à Montréal, 2005. http://accesbib.uqam.ca/cgi-bin/bduqam/transit.pl?&noMan=24065971.
Cousin, Jean-Gabriel. "Méthodologies de conception de cœurs de processeurs spécifiques (ASIP) : mise en œuvre sous contraintes, estimation de la consommation." Rennes 1, 1999. http://www.theses.fr/1999REN10085.
Full textMaatouk, Hassan. "Correspondance entre régression par processus Gaussien et splines d'interpolation sous contraintes linéaires de type inégalité. Théorie et applications." Thesis, Saint-Etienne, EMSE, 2015. http://www.theses.fr/2015EMSE0791/document.
This thesis is dedicated to interpolation problems when the numerical function is known to satisfy some properties such as positivity, monotonicity or convexity. Two methods of interpolation are studied. The first one is deterministic and is based on convex optimization in a Reproducing Kernel Hilbert Space (RKHS). The second one is a Bayesian approach based on Gaussian Process Regression (GPR), or Kriging. By using a finite linear functional decomposition, we propose to approximate the original Gaussian process by a finite-dimensional Gaussian process such that conditional simulations satisfy all the inequality constraints. As a consequence, GPR is equivalent to the simulation of a Gaussian vector truncated to a convex set. The mode, or Maximum A Posteriori, is defined as a Bayesian estimator, and prediction intervals are quantified by simulation. Convergence of the method is proved and the correspondence between the two methods is established. This can be seen as an extension of the correspondence established by [Kimeldorf and Wahba, 1971] between Bayesian estimation on stochastic processes and smoothing by splines. Finally, a real application in insurance and finance is given, to estimate a term-structure curve and default probabilities.
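To make the truncated-Gaussian idea concrete, here is a naive Python sketch that draws Gaussian-process posterior paths and keeps only those satisfying a positivity constraint on a grid. The kernel, data and rejection sampling are illustrative assumptions; the thesis's finite-dimensional spline construction allows much more efficient truncated-Gaussian simulation than plain rejection.

    import numpy as np

    def k_rbf(a, b, ell=0.3):
        # squared-exponential kernel (toy choice)
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    rng = np.random.default_rng(1)
    x_obs = np.array([0.1, 0.4, 0.9]); y_obs = np.array([0.3, 0.1, 0.6])  # positive data
    x = np.linspace(0.0, 1.0, 50)

    K = k_rbf(x_obs, x_obs) + 1e-6 * np.eye(3)
    Ks = k_rbf(x, x_obs)
    Ki = np.linalg.inv(K)
    mu = Ks @ Ki @ y_obs                          # GP posterior mean
    cov = k_rbf(x, x) - Ks @ Ki @ Ks.T            # GP posterior covariance

    L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(x)))
    draws = []
    while len(draws) < 100:
        f = mu + L @ rng.standard_normal(len(x))
        if (f >= 0.0).all():                      # keep only positive paths
            draws.append(f)
    draws = np.array(draws)                       # sample from the constrained posterior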
Lobjois, Lionel. "Problèmes d'optimisation combinatoire sous contraintes : vers la conception automatique de méthodes de résolution adaptées à chaque instance." Toulouse, ENSAE, 1999. http://www.theses.fr/1999ESAE0025.
Zebadúa, Augusto. "Traitement du signal dans le domaine compressé et quantification sur un bit : deux outils pour les contextes sous contraintes de communication." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT085/document.
Monitoring physical phenomena by using a network of sensors (autonomous but interconnected) is highly constrained in energy consumption, mainly for data transmission. In this context, this thesis proposes signal processing tools to reduce communications without compromising accuracy in subsequent computations. The complexity of these methods is kept low, so that they consume only little additional energy. Our two building blocks are compression during signal acquisition (Compressive Sensing) and coarse quantization (1 bit). We first study the Compressed Correlator, an estimator which allows for evaluating correlation functions, time delays and spectral densities directly from compressed signals. Its performance is compared with that of the usual correlator. As we show, if the signal of interest has limited frequency content, the proposed estimator significantly outperforms the conventional correlator. Then, inspired by the coarse-quantization correlators of the 50s and 60s, two new correlators are studied: the 1-bit Compressed and the Hybrid Compressed correlators, which can also outperform their uncompressed counterparts. Finally, we show the applicability of these methods in the context of interest through the exploitation of real data.
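The 1-bit correlation idea can be illustrated with the classical van Vleck (arcsine-law) correction for jointly Gaussian signals; this short Python sketch is a generic illustration of coarse-quantization correlation, not the specific estimators studied in the thesis.

    import numpy as np

    rng = np.random.default_rng(2)
    n, rho = 100000, 0.6
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

    r1 = np.mean(np.sign(x) * np.sign(y))   # correlation of the 1-bit streams
    rho_hat = np.sin(np.pi / 2 * r1)        # van Vleck (arcsine) correction
    print(rho_hat)                          # close to 0.6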
Luong, Marie. "Conception optimale de l'architecture d'un système d'instrumentation sous contraintes de diagnostic, de fiabilité et de disponibilité." Vandoeuvre-les-Nancy, INPL, 1996. http://www.theses.fr/1996INPL155N.
Pages, Gaël. "Estimation de la posture d'un sujet paraplégique en vue d'une rééducation des membres inférieurs sous stimulation électrique fonctionnelle." PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2006. http://tel.archives-ouvertes.fr/tel-00123102.
Venturino, Antonello. "Constrained distributed state estimation for surveillance missions using multi-sensor multi-robot systems." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST118.
Distributed algorithms have pervaded many aspects of control engineering, with applications to multi-robot systems and sensor networks, covering topics such as control, state estimation, fault detection, and cyber-attack detection and mitigation on cyber-physical systems. Indeed, distributed schemes face problems like scalability and communication between agents. In multi-agent systems applications (e.g. fleets of mobile robots, sensor networks) it is now common to design state estimation algorithms in a distributed way, so that the agents can accomplish their tasks based on some shared information within their neighborhoods. In surveillance missions, a low-cost static Sensor Network (e.g. with cameras) can be deployed to localize, in a distributed way, intruders in a given area. In this context, the main objective of this work is to design distributed observers to estimate the state of a dynamic system (e.g. a multi-robot system) that efficiently handle constraints and uncertainties but with a reduced computation load. This PhD thesis proposes new Distributed Moving Horizon Estimation (DMHE) algorithms with a Luenberger pre-estimation in the formulation of the local problem solved by each sensor, resulting in a significant reduction of the computation time while preserving the estimation accuracy. Moreover, this manuscript proposes a consensus strategy to improve the convergence time of the estimates among sensors under weak unobservability conditions (e.g. vehicles not visible from some cameras). Another contribution concerns the improvement of the convergence of the estimation error by mitigating unobservability issues through an l-step neighborhood information-spreading mechanism. The proposed distributed estimation is designed for realistic large-scale scenarios involving sporadic measurements (i.e. available at time instants not known a priori). To this aim, constraints on measurements (e.g. camera field of view) are embodied using time-varying binary parameters in the optimization problem. Both realistic simulations within the Robot Operating System (ROS) framework and the Gazebo environment, as well as experimental validation of the proposed DMHE localization technique on a Multi-Vehicle System (MVS) with ground mobile robots, are performed, using a static Sensor Network composed of low-cost cameras which provide measurements of the positions of the robots of the MVS. The proposed algorithms are compared to previous results from the literature, considering several metrics such as computation time and accuracy of the estimates.
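As a much-simplified, centralized illustration of moving horizon estimation with constraints, the Python sketch below estimates the state at the start of a measurement window for a linear model by bounded least squares. The constant-velocity model, bounds and noise level are assumptions; the thesis's DMHE additionally distributes the problem across sensors and adds consensus and a Luenberger pre-estimation.

    import numpy as np
    from scipy.optimize import lsq_linear

    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # constant-velocity model (toy)
    C = np.array([[1.0, 0.0]])               # position-only measurements
    N = 10                                   # horizon length

    def mhe_window(y_win, lb, ub):
        """Estimate the state at the start of the window from N scalar measurements."""
        rows, Ak = [], np.eye(2)
        for _ in range(N):
            rows.append(C @ Ak)
            Ak = A @ Ak
        H = np.vstack(rows)                              # stacked observability map
        return lsq_linear(H, y_win, bounds=(lb, ub)).x   # box constraints on the state

    rng = np.random.default_rng(3)
    x, ys = np.array([0.0, 1.0]), []
    for _ in range(N):
        ys.append((C @ x)[0] + 0.05 * rng.standard_normal())
        x = A @ x
    x0_hat = mhe_window(np.array(ys), lb=[-10.0, 0.0], ub=[10.0, 5.0])  # velocity known nonnegative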
Gu, Yi. "Estimation sous contrainte et déconvolution autodidacte." Paris 11, 1989. http://www.theses.fr/1989PA112063.
Lahbib, Insaf. "Contribution à l'analyse des effets de vieillissement de composants actifs et de circuits intégrés sous contraintes DC et RF en vue d'une approche prédictive." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC256.
The work of this thesis focuses on the simulation of the degradation of the electrical parameters of MOS and bipolar transistors under static and dynamic stresses. This study was conducted using an in-house reliability simulation tool. Depending on the MOS or bipolar technology, the mechanisms studied were, successively: Hot Carrier Injection, Bias Temperature Instability, Mixed Mode, and Reverse Emitter-Base Bias. The investigation was then extended to the circuit level. The effect of transistor degradation on a ring oscillator's frequency and on the RF performance of a low-noise amplifier was investigated. The circuits were subjected to DC, AC and RF stresses. The predictability of these degradations has been validated by experimental aging tests on encapsulated and PCB-mounted demonstrators. The results of these studies proved the accuracy of the simulator and validated the quasi-static calculation method used to predict degradation under dynamic stress. The goal of this research is to embed this predictive approach into a circuit design flow to ensure its reliability.
Terreaux, Eugénie. "Théorie des Matrices Aléatoires pour l'Imagerie Hyperspectrale." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC091/document.
Hyperspectral imaging generates large volumes of data due to its high spectral and spatial resolution, as is increasingly the case for other kinds of applications. For hyperspectral imaging, the data complexity comes from spectral and spatial heterogeneity, the non-Gaussianity of the noise, and other physical processes. Nevertheless, this complexity enhances the wealth of collected information, which needs to be processed with adapted methods. Random matrix theory and robust processing are suggested here for hyperspectral imaging: random matrix theory is adapted to large data sets, and robustness makes it possible to better take into account the non-Gaussianity of the data. This thesis aims to improve model order selection on a hyperspectral image and the unmixing problem. As far as model order selection is concerned, three new algorithms are developed, and the last one, more robust, gives better performance. A financial application is also presented. As for the unmixing problem, three methods that take into account the peculiarities of hyperspectral imaging are suggested. Random matrix theory is of great interest for hyperspectral image processing, as demonstrated in this thesis. The different methods developed here can be applied to other fields of signal processing requiring the processing of large data sets.
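A standard random-matrix baseline for model order selection, in the spirit of the first problem addressed here, is to count sample-covariance eigenvalues above the Marchenko-Pastur upper edge. This Python sketch assumes white Gaussian noise of known variance, which is precisely the assumption the thesis's robust estimators relax.

    import numpy as np

    rng = np.random.default_rng(4)
    p, n, k_true, sigma2 = 60, 600, 3, 1.0
    # k_true strong sources buried in white noise (toy data)
    S = 3.0 * rng.standard_normal((p, k_true))
    X = S @ rng.standard_normal((k_true, n)) + np.sqrt(sigma2) * rng.standard_normal((p, n))

    R = X @ X.T / n                              # sample covariance matrix
    eig = np.linalg.eigvalsh(R)
    edge = sigma2 * (1 + np.sqrt(p / n))**2      # Marchenko-Pastur upper edge
    k_hat = int(np.sum(eig > edge))              # eigenvalues sticking out = sources
    print(k_hat)                                 # expected: 3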
Sircoulomb, Vincent. "Etude des concepts de filtrage robuste aux méconnaissances de modèle et aux pertes de mesures. Application aux systèmes de navigation." Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2008. http://tel.archives-ouvertes.fr/tel-00350451.
Bennani, Youssef. "Caractérisation de la diversité d'une population à partir de mesures quantifiées d'un modèle non-linéaire. Application à la plongée hyperbare." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4128/document.
This thesis proposes a new method for nonparametric density estimation from censored data, where the censoring regions can have arbitrary shape and are elements of partitions of the parametric domain. This study was motivated by the need to estimate the distribution of the parameters of a biophysical model of decompression, in order to be able to predict the risk of decompression sickness. In this context, the observations correspond to quantized counts of bubbles circulating in the blood of a set of divers having explored a variety of diving profiles (depth, duration); the biophysical model predicts the gas volume produced along a given diving profile for a diver with known biophysical parameters. In a first step, we point out the limitations of the classical nonparametric maximum-likelihood estimator. We propose several methods for its calculation and show that it suffers from several problems: in particular, it concentrates the probability mass in a few regions only, which makes it inappropriate for describing a natural population. We then propose a new approach relying both on the maximum-entropy principle, in order to ensure suitable regularity of the solution, and on the maximum-likelihood criterion, to guarantee a good fit to the data. It consists in searching for the probability law with maximum entropy whose maximum deviation from the empirical averages is set by maximizing the data likelihood. Several examples illustrate the superiority of our solution compared to the classical nonparametric maximum-likelihood estimator, in particular concerning generalisation performance.
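For intuition on the maximum-entropy side of the approach, the following Python sketch computes the maximum-entropy distribution on a finite support subject to a fixed mean; the solution is a discrete exponential family. The thesis replaces this simple moment constraint by a deviation-from-empirical-averages constraint calibrated by the likelihood, which this sketch does not attempt.

    import numpy as np
    from scipy.optimize import brentq

    # Maximum-entropy distribution on {0,...,K} with a prescribed mean:
    # the solution has the form p_i proportional to exp(lam * i).
    K, target_mean = 10, 3.2
    xs = np.arange(K + 1)

    def mean_of(lam):
        w = np.exp(lam * xs)
        w /= w.sum()
        return w @ xs

    lam = brentq(lambda l: mean_of(l) - target_mean, -5.0, 5.0)  # fit the multiplier
    p = np.exp(lam * xs); p /= p.sum()       # maxent density with the required mean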
Sircoulomb, Vincent. "Étude des concepts de filtrage robuste aux méconnaissances de modèles et aux pertes de mesures. Application aux systèmes de navigation." Thesis, Vandoeuvre-les-Nancy, INPL, 2008. http://www.theses.fr/2008INPL093N/document.
To solve the problem of estimating the state of a system, it is necessary to have at one's disposal a model governing the dynamics of the state variables and to measure, directly or indirectly, all or part of these variables. The work presented in this thesis deals with the estimation issue in the presence of model uncertainties and sensor losses. The first part of this work presents the synthesis of a state estimation device for nonlinear systems. It consists in selecting a state estimator and properly tuning it, and then, thanks to a criterion introduced for the occasion, in algorithmically designing a hardware redundancy aimed at compensating for some sensor losses. The second part of this work deals with the design of a sub-model compensating for some model uncertainties. This sub-model, designed using the Allan variance, is usable by a Kalman filter. This work has been applied to take gyroscopic drifts into account in a GPS-INS integrated navigation system based on a constrained Kalman filter. The results obtained, coming from experiments on two aircraft trajectories, showed the safe and robust behaviour of the proposed method.
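The Allan variance used here to characterize inertial sensor errors is straightforward to compute; the following Python sketch implements the standard non-overlapped estimator on a synthetic white-noise gyro signal (the sampling rate and noise level are illustrative assumptions).

    import numpy as np

    def allan_variance(rate, fs, taus):
        """Non-overlapped Allan variance of a rate signal sampled at fs Hz."""
        out = []
        for tau in taus:
            m = int(tau * fs)                       # samples per cluster
            nclusters = len(rate) // m
            means = rate[: nclusters * m].reshape(nclusters, m).mean(axis=1)
            out.append(0.5 * np.mean(np.diff(means) ** 2))
        return np.array(out)

    rng = np.random.default_rng(5)
    fs = 100.0
    gyro = 0.02 * rng.standard_normal(200000)       # white noise ~ angle random walk
    avar = allan_variance(gyro, fs, taus=[0.1, 1.0, 10.0])
    # For white noise, avar scales as 1/tau: slope -1 on a log-log plot.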
Borloz, Bruno. "Estimation, détection, classification par maximisation du rapport signal-à-bruit : le filtre adapté stochastique sous contrainte." Toulon, 2005. http://www.theses.fr/2005TOUL0001.
Detection and classification problems of random signals (transients, textures...) have great importance. The constrained matched filter aims at maximizing the signal-to-noise ratio in a subspace whose dimension is given a priori, in order to reach these objectives. It is an extension of the stochastic matched filter and of the matched filter, with which it shares the same approach. This approach is justified when probability density functions are unknown. The method assumes that only the second-order properties of the processes at play are known, through covariance matrices. The equations to solve form an eigenvalue problem whose matrix is unknown but depends on the signal-to-noise ratio, the term to maximize being written as a ratio of sums of quadratic forms: an algorithm is proposed and proved to converge to the correct solution. Performances are quantified and methods are compared via ROC curves.
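The core computation, maximizing a ratio of quadratic forms h'Ah / h'Bh over a subspace of prescribed dimension, reduces (for fixed matrices) to a generalized symmetric eigenvalue problem. This Python sketch shows that reduction with toy covariance matrices; the thesis's algorithm must iterate, because there the matrix itself depends on the signal-to-noise ratio.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(6)
    n, q = 20, 3                                     # observation size, subspace dimension
    Ls = rng.standard_normal((n, n)); A = Ls @ Ls.T                  # signal covariance (toy)
    Ln = rng.standard_normal((n, n)); B = Ln @ Ln.T + n * np.eye(n)  # noise covariance (toy)

    w, V = eigh(A, B)           # generalized eigendecomposition, ascending eigenvalues
    H = V[:, -q:]               # the q directions with the largest SNR
    snr_best = w[-1]            # maximum of h'Ah / h'Bh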
Makni, Aida. "Fusion de données inertielles et magnétiques pour l’estimation de l’attitude sous contrainte énergétique d’un corps rigide accéléré." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT025/document.
In this PhD thesis we deal with attitude estimation of an accelerated rigid body moving in 3D space, using quaternion parameterization. This problem has been widely studied in the literature in various application areas. The main objective of the thesis is to propose new data fusion methods to combine inertial (gyro) and magnetic measurements. The first challenge concerns attitude estimation in dynamic cases, in which the external acceleration of the body is not negligible compared to gravity. Two main approaches are proposed in this context. First, a quaternion-based adaptive Kalman filter (q-AKF) was designed in order to compensate for such external acceleration. Precisely, a smart detector is designed to decide whether the body is in a static or a dynamic case. Then, the covariance matrix of the external acceleration is estimated to tune the filter gain. Second, we developed a descriptor filter based on a new formulation of the dynamic model, where the process model is fed by accelerometer measurements while the observation model is fed by gyro and magnetometer measurements. Such modeling gives rise to a descriptor system. The resulting model takes the external acceleration of the body into account in a very efficient way. The second challenge is related to the energy consumption of the gyroscope, considered the most power-consuming sensor. We study how to reduce gyro measurement acquisition by switching the sensor on and off while maintaining acceptable attitude estimation. The efficiency of the proposed methods is evaluated by means of numerical simulations and experimental tests.
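A minimal sketch of the static/dynamic detection idea: flag a sample as quasi-static when the accelerometer norm stays close to gravity, and inflate the accelerometer noise covariance otherwise so the filter gain leans on the gyros. The threshold and covariance levels below are assumptions, not the q-AKF tuning of the thesis.

    import numpy as np

    G = 9.81

    def is_static(acc, g=G, eps=0.4):
        """Flag quasi-static samples: accelerometer norm close to gravity."""
        return abs(np.linalg.norm(acc) - g) < eps

    def acc_noise_cov(acc, r_static=1e-2, r_dyn=1e2):
        """Inflate the accelerometer noise covariance when external
        acceleration is detected, so the filter trusts the gyros instead."""
        r = r_static if is_static(acc) else r_dyn
        return r * np.eye(3)

    print(acc_noise_cov(np.array([0.1, 0.2, 9.78])))   # static: small covariance
    print(acc_noise_cov(np.array([3.0, 1.0, 9.80])))   # dynamic: inflated covariance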
Giguelay, Jade. "Estimation des moindres carrés d'une densité discrète sous contrainte de k-monotonie et bornes de risque. Application à l'estimation du nombre d'espèces dans une population." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS248/document.
This thesis belongs to the field of nonparametric density estimation under shape constraint. The densities are discrete and the shape constraint is k-monotonicity, k>1, which is a generalization of convexity. The integer k is an indicator of the hollow's degree of a convex function. The thesis is composed of three parts, an introduction, a conclusion and an appendix. Introduction: The introduction is structured in three chapters. The first chapter is a state of the art of density estimation under shape constraint. The second chapter is a synthesis of the thesis, available in French and in English. Finally, Chapter 3 is a short chapter summarizing the notations and the classical mathematical results used in the manuscript. Part I, Estimation of a discrete distribution under k-monotonicity constraint: Two least-square estimators of a discrete distribution p* under constraint of k-monotonicity are proposed. Their characterisation is based on the decomposition on a spline basis of k-monotone sequences, and on the properties of their primitives. Their statistical properties are studied, and in particular their quality of estimation is measured in terms of the quadratic error. They are proved to converge at the parametric rate. An algorithm derived from the support reduction algorithm is implemented in the R package pkmon. A simulation study illustrates the properties of the estimators. This piece of work, which constitutes Part I of the manuscript, has been published in the Electronic Journal of Statistics (Giguelay, 2017). Part II, Calculation of risk bounds: In the first chapter of Part II, a methodology for calculating risk bounds of the least-square estimator is given. These bounds are adaptive in that they depend on a compromise between the distance of p* to the frontier of the set of k-monotone densities with finite support, and the complexity (linked to the spline decomposition) of densities belonging to this set that are close to p*. The methodology, based on the variational formula of the risk proposed by Chatterjee (2014), is generalized to the framework of discrete k-monotone densities. Then the bracketing entropies of the relevant functional space are calculated, leading to control of the empirical process involved in the quadratic risk. Optimality of the risk bound is discussed in comparison with the results previously obtained in the continuous case and in the Gaussian regression framework. In the second chapter of Part II, several results concerning bracketing entropies of spaces of k-monotone sequences are presented. Part III, Estimating the number of species in a population and tests of k-monotonicity: The last part deals with the problem of estimating the number of species present in a given area at a given time, based on the abundances of the species that have been observed. A definition of a k-monotone abundance distribution is proposed. It allows one to relate the probability of observing zero species to the truncated abundance distribution. Two approaches are proposed. The first one is based on the least-squares estimator under constraint of k-monotonicity, the second one is based on the empirical distribution. Both estimators are compared using a simulation study. Because the estimator of the number of species depends on the degree of monotonicity k, we propose a procedure for choosing this parameter, based on nested testing procedures. The asymptotic levels and power of the testing procedure are calculated, and the behaviour of the method in practical cases is assessed on the basis of a simulation study.
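For the simplest case k = 2 (convexity), the constrained least-squares estimator can be written as a small quadratic program; the Python sketch below solves it with a generic solver. This is only an illustration of the k = 2 case: the thesis (and the pkmon package) uses a dedicated support-reduction algorithm valid for any k.

    import numpy as np
    from scipy.optimize import minimize, LinearConstraint

    rng = np.random.default_rng(7)
    m = 12                                            # support {0, ..., m-1}
    p_true = np.linspace(m, 1, m)**2; p_true /= p_true.sum()   # a convex density (toy)
    counts = rng.multinomial(300, p_true)
    p_emp = counts / counts.sum()                     # empirical frequencies

    D2 = np.zeros((m - 2, m))                         # second-difference operator
    for i in range(m - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]

    cons = [LinearConstraint(np.ones(m), 1.0, 1.0),   # sums to one
            LinearConstraint(np.eye(m), 0.0, np.inf), # nonnegative
            LinearConstraint(D2, 0.0, np.inf)]        # convex (2-monotone)
    res = minimize(lambda p: np.sum((p - p_emp)**2), p_emp,
                   jac=lambda p: 2 * (p - p_emp),
                   constraints=cons, method='trust-constr')
    p_hat = res.x                                     # least-squares convex estimate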
Bérard, Thomas. "Estimation du champ de contrainte dans le massif granitique de Soultz-sous-Forêts : implication sur la rhéologie de la croûte fragile." Paris, Institut de physique du globe, 2003. http://www.theses.fr/2003GLOB0005.
Simon, Antoine. "Optimisation énergétique de chaînes de traction hybrides essence et Diesel sous contrainte de polluants : Étude et validation expérimentale." Thesis, Orléans, 2018. http://www.theses.fr/2018ORLE2010.
Powertrain hybridization is a solution that has been adopted in order to conform to future emissions regulation standards. The supervisory strategy of the hybrid powertrain splits the requested power between the internal combustion engine and the electric machine. In past studies, this strategy has typically responded to an optimization problem with the objective of reducing consumption. However, it is now necessary to take pollutant emissions into account as well. The after-treatment system, placed in the exhaust line of the engine, is able to reduce the pollutants emitted into the atmosphere. It is efficient above a certain temperature threshold, and the temperature of the system depends on the heat brought by the engine exhaust gas. The first part of this dissertation is aimed at modelling the energy consumption and pollutant emissions of the hybrid powertrain. The efficiency model of the after-treatment system is adapted for use in two different contexts: the zero-dimensional model conforms to the constraints of optimal control computation, while the one-dimensional model, associated with a state estimator, can be embedded in a vehicle and run in real time. From this work, the second part of the dissertation derives supervisory strategies from optimal control theory. On the one hand, Bellman's principle is used to compute the optimal control of a Diesel hybrid vehicle under different supervisory criteria, each having more or less information about the after-treatment efficiency over NOx emissions. On the other hand, a strategy derived from Pontryagin's minimum principle, embedded in a gasoline hybrid vehicle, running in real time and calibrated with two parameters, is proposed. The whole of this work is validated experimentally on an engine test bench and shows a significant reduction in pollutant emissions for a slight fuel consumption penalty.
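A strategy based on Pontryagin's minimum principle with a constant costate is, in essence, an equivalent consumption minimization (ECMS). The following Python sketch shows the instantaneous torque-split choice with toy fuel and electric maps; all numbers are assumptions, and a pollutant-aware variant in the spirit of the thesis would add a weighted NOx term to the cost.

    import numpy as np

    def ecms_split(t_req, s=2.5, n_grid=41):
        """Choose the engine/motor torque split minimizing an equivalent
        power: fuel power + s * battery power (toy maps throughout)."""
        t_eng = np.linspace(0.0, t_req, n_grid)
        t_mot = t_req - t_eng
        fuel_power = 80.0 + 2.2 * t_eng + 0.010 * t_eng**2   # toy fuel map
        fuel_power[t_eng == 0.0] = 0.0                       # engine off
        batt_power = 1.8 * t_mot                             # toy electric path
        cost = fuel_power + s * batt_power                   # equivalence factor s ~ costate
        i = int(np.argmin(cost))
        return t_eng[i], t_mot[i]

    print(ecms_split(120.0))   # split selected by the equivalence factor s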
Mazuyer, Antoine. "Estimation de l'état de contrainte initial in situ dans les réservoirs par approche inverse." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0079/document.
Initial stress state is the stress state before any human activity. Its knowledge is essential for scientific (understanding of plate tectonics), preventive (earthquake prediction) and industrial (understanding reservoirs before their exploitation) purposes. We present a method to estimate the initial stress state in a 3D domain from sparse data. The method relies on an inverse approach which uses the finite element method to solve the elastic mechanical problem. The model parameters are Neumann conditions, defined as piecewise linear functions. The data are stress state observations, such as intensity and orientation at a few points. An ensemble optimization method is used to solve the inverse problem. The method is tested on a synthetic case where the reference solution is known. On this example, the method succeeds in retrieving the stress state at the data points as well as in the whole domain. The method is enriched with a mechanical criterion which imposes mechanical constraints on the domain under study. The method is then applied to a real case, the Neuquén basin in Argentina, where borehole stress data are available. This application reveals some of the limits of the presented method. Then, the effect of faults on the stress state is investigated. Different modeling strategies are presented, the objective being to reduce the computational cost, which can be very high when dealing with such complex structures. We propose to model them using only elastic properties. Finally, we present the software developed to run the mechanical simulations: RINGMesh handles the structural model data structure and RINGMecha runs the mechanical simulations on the model. RINGMecha is interfaced with several simulators, each of which can be called separately, depending on the problem to be solved. The interfacing of RINGMecha with third-party simulators is done in a user-friendly manner. RINGMecha was used for all the computations presented in this thesis. It was built so as to be extendable to other problems, with other simulators.
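Because linear elasticity makes the stress at the data points linear in the Neumann boundary parameters, a basic version of this inverse problem can be assembled column by column from unit-load forward runs and solved by least squares. In this Python sketch the forward solver is a hypothetical stand-in matrix, and the thesis actually uses an ensemble optimization method rather than this explicit linearization.

    import numpy as np

    def forward(m):
        """Hypothetical black-box FE run: boundary-load parameters -> stresses
        at the data points. Linear elasticity makes this map linear in m."""
        G_true = np.array([[2.0, 0.5, 0.1],
                           [0.3, 1.5, 0.4],
                           [0.1, 0.2, 1.1],
                           [0.6, 0.1, 0.9]])   # stand-in for the FE solver
        return G_true @ m

    n_par, n_obs = 3, 4
    G = np.column_stack([forward(e) for e in np.eye(n_par)])   # one unit-load run per parameter
    rng = np.random.default_rng(8)
    d_obs = forward(np.array([1.2, -0.4, 0.8])) + 0.01 * rng.standard_normal(n_obs)
    m_hat, *_ = np.linalg.lstsq(G, d_obs, rcond=None)          # recovered Neumann parameters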
Mekhnacha, Kamel. "Méthodes Probabilistes Bayesiennes pour la prise en compte des incertitudes géométriques : Application à la CAO-Robotique." PhD thesis, Grenoble INPG, 1999. http://tel.archives-ouvertes.fr/tel-00010472.
Righi, Ali. "Sur l'estimation de densités prédictives et l'estimation d'un coût." Rouen, 2011. http://www.theses.fr/2011ROUES002.
This thesis is divided into two parts. In the first part, we investigate predictive density estimation for a multivariate Gaussian model under the Kullback-Leibler loss. We focus on the link with the problem of estimating the mean under quadratic loss. We obtain several parallel results. We prove minimaxity and improved estimation results under restrictions on the unknown mean. In particular, we show, via two different paths, that the Bayesian predictive density associated with the uniform prior on a convex set C dominates the best invariant predictive density when μ ∈ C. This parallels Hartigan's 2004 result for the estimation of the mean under quadratic loss. At the end of this part, we give numerical simulations to visualize the gain obtained by some of our newly proposed estimators. In the second part, for the Gaussian model of dimension p, we treat the problem of estimating the loss of the standard estimator of the mean (that is, δ0(X) = X). We give generalized Bayes estimators which dominate the unbiased estimator of loss (that is, δ0(X) = p), through sufficient conditions for p ≥ 5. Examples illustrate the theory. Then we carry out a technical study and numerical simulations on the gain reached by one of our proposed minimax generalized Bayes estimators of loss.
Blagouchine, Iaroslav. "Modélisation et analyse de la parole : Contrôle d’un robot parlant via un modèle interne optimal basé sur les réseaux de neurones artificiels. Outils statistiques en analyse de la parole." Thesis, Aix-Marseille 2, 2010. http://www.theses.fr/2010AIX26666.
This Ph.D. dissertation deals with speech modeling and processing, both of which share the speech quality aspect. An optimum internal model with constraints is proposed and discussed for the control of a biomechanical speech robot based on the equilibrium point hypothesis (EPH, lambda-model). It is supposed that the robot's internal space is composed of the motor commands lambda of the equilibrium point hypothesis. The main idea of the work is that the robot's movements, and in particular its speech production, are carried out in such a way that the length of the path traveled in the internal space is minimized, under acoustical and mechanical constraints. The mathematical aspect of the problem leads to one of the problems of the calculus of variations, the so-called geodesic problem, whose exact analytical solution is quite complicated. By using some empirical findings, an approximate solution for the proposed optimum internal model is then developed and implemented. It gives interesting and challenging results, and shows that the proposed internal model is quite realistic; namely, some similarities are found between the robot's speech and real speech. Next, aiming to analyze speech signals, several methods of statistical speech signal processing are developed. They are based on higher-order statistics (namely, on normalized central moments and the fourth-order cumulant), as well as on the discrete normalized entropy. In this framework, we also designed an unbiased and efficient estimator of the fourth-order cumulant, in both batch and adaptive versions.
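The fourth-order statistics mentioned here are easy to illustrate: the Python sketch below computes the normalized fourth-order cumulant of a signal frame (zero for Gaussian frames, positive for heavier-tailed ones). It is a plain batch estimator, not the unbiased and adaptive versions designed in the thesis.

    import numpy as np

    def normalized_cum4(frame):
        """Normalized fourth-order cumulant of a signal frame:
        ~0 for a Gaussian frame, positive for impulsive ones."""
        x = frame - frame.mean()
        m2 = np.mean(x**2)
        return (np.mean(x**4) - 3 * m2**2) / m2**2

    rng = np.random.default_rng(9)
    print(normalized_cum4(rng.standard_normal(4096)))   # ~ 0 (Gaussian)
    print(normalized_cum4(rng.laplace(size=4096)))      # ~ 3 (heavier tails)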
Barbié, Laureline. "Raffinement de maillage multi-grille local en vue de la simulation 3D du combustible nucléaire des Réacteurs à Eau sous Pression." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4742.
The aim of this study is to improve the performance, in terms of memory space and computational time, of the current modelling of Pellet-Cladding mechanical Interaction (PCI), a complex phenomenon which may occur during high power rises in pressurised water reactors. Among mesh refinement methods (methods dedicated to efficiently treating local singularities), a local multi-grid approach was selected because it enables the use of a black-box solver while dealing with few degrees of freedom at each level. The Local Defect Correction (LDC) method, well suited to a finite element discretisation, was first analysed and checked in linear elasticity, on configurations resulting from PCI, since its use in solid mechanics is not widespread. Various strategies concerning the implementation of the multilevel algorithm were also compared. Coupling the LDC method with the Zienkiewicz-Zhu a posteriori error estimator, in order to automatically detect the zones to be refined, was then tested. Performances obtained on two-dimensional and three-dimensional cases are very satisfactory, since the proposed algorithm is more efficient than h-adaptive refinement methods. Lastly, the LDC algorithm was extended to nonlinear mechanics. Space/time refinement, as well as transmission of the initial conditions during the remeshing step, were examined. The first results obtained are encouraging and show the interest of using the LDC method for PCI modelling.
Liu, Hao. "Stratégie de raffinement automatique de maillage et méthodes multi-grilles locales pour le contact : application à l'interaction mécanique pastille-gaine." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4720/document.
This Ph.D. work takes place within the framework of studies on Pellet-Cladding mechanical Interaction (PCI), which occurs in the fuel rods of pressurized water reactors. This manuscript focuses on automatic mesh refinement to simulate this phenomenon more accurately while maintaining acceptable computational time and memory space for industrial calculations. An automatic mesh refinement strategy based on the combination of the Local Defect Correction multigrid method (LDC) with the Zienkiewicz and Zhu a posteriori error estimator is proposed. The estimated error is used to detect the zones to be refined, where the local subgrids of the LDC method are generated. Several stopping criteria are studied to end the refinement process when the solution is accurate enough or when further refinement no longer improves the global solution accuracy. Numerical results for elastic 2D test cases with pressure discontinuity show the efficiency of the proposed strategy. The automatic mesh refinement in the case of unilateral contact problems is then considered. The strategy previously introduced can easily be adapted to multibody refinement by estimating the solution error on each body separately. Post-processing is often necessary to ensure the conformity of the refined areas with regard to the contact boundaries. A variety of numerical experiments with elastic contact (with or without friction, with or without an initial gap) confirms the efficiency and adaptability of the proposed strategy.
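A one-dimensional caricature of the Zienkiewicz-Zhu indicator used to flag zones for LDC subgrids: recover a smoothed gradient by nodal averaging and measure its distance to the raw element gradient. The mesh and the kinked field below are illustrative assumptions.

    import numpy as np

    def zz_indicator(x, u):
        """Zienkiewicz-Zhu-style indicator for a 1D piecewise-linear field:
        compare the raw element gradient with a smoothed (recovered) one."""
        h = np.diff(x)
        grad = np.diff(u) / h                       # constant gradient per element
        g_node = np.empty(len(x))                   # recovered gradient at nodes
        g_node[1:-1] = 0.5 * (grad[:-1] + grad[1:])
        g_node[0], g_node[-1] = grad[0], grad[-1]
        g_rec = 0.5 * (g_node[:-1] + g_node[1:])    # recovered gradient per element
        return np.abs(g_rec - grad) * h             # elementwise error indicator

    x = np.linspace(0.0, 1.0, 21)
    u = np.where(x < 0.5, x, 9 * x - 4.0)           # kink at x = 0.5
    eta = zz_indicator(x, u)
    to_refine = np.argsort(eta)[-2:]                # elements flagged for local subgrids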