Theses on the topic "Problèmes d’estimation de performances"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 20 theses for your research on the topic "Problèmes d’estimation de performances".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse theses on a wide variety of disciplines and organize your bibliography correctly.
Barré, Mathieu. "Worst-case analysis of efficient first-order methods". Electronic Thesis or Diss., Université Paris sciences et lettres, 2021. http://www.theses.fr/2021UPSLE064.
Many modern applications rely on solving optimization problems (e.g., computational biology, mechanics, finance), establishing optimization methods as crucial tools in many scientific fields. Providing guarantees on the (hopefully good) behaviors of these methods is therefore of significant interest. A standard way of analyzing optimization algorithms consists in worst-case reasoning, that is, providing guarantees on the behavior of an algorithm (e.g., its convergence speed) that are independent of the function on which the algorithm is applied and true for every function in a particular class. This thesis aims at providing worst-case analyses of a few efficient first-order optimization methods. We start with the study of Anderson acceleration methods, for which we provide new explicit worst-case bounds guaranteeing precisely when acceleration occurs. We obtained these guarantees by providing upper bounds on a variation of the classical Chebyshev optimization problem on polynomials, which we believe to be of independent interest. Then, we extend the Performance Estimation Problem (PEP) framework, originally designed for principled analyses of fixed-step algorithms, to study first-order methods with adaptive parameters. This is illustrated in particular through the worst-case analyses of the canonical gradient method with Polyak step sizes, which uses gradient norms and function values, and of an accelerated version of it. The approach is also demonstrated on other standard adaptive algorithms. Finally, the last contribution of this thesis is to further develop the PEP methodology for analyzing first-order methods relying on inexact proximal computations. Using this framework, we produce algorithms with optimized worst-case guarantees and provide (numerical and analytical) worst-case bounds for some standard algorithms in the literature.
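To make the object of study concrete, here is a minimal sketch of Anderson acceleration applied to a generic fixed-point iteration, in the standard windowed least-squares (type-II) form. This is our own illustration under common textbook assumptions, not code from the thesis; the function names and the toy contraction map are hypothetical.

```python
import numpy as np

def anderson_acceleration(g, x0, m=5, iters=50, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x <- g(x).

    Keeps a window of the last m residuals f_k = g(x_k) - x_k and mixes the
    iterates with least-squares coefficients chosen to minimize the norm of
    the combined residual (Walker-Ni type-II formulation)."""
    x = np.asarray(x0, dtype=float)
    G_hist, F_hist = [], []            # histories of g(x_k) and residuals f_k
    for _ in range(iters):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            break
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m:            # sliding memory window
            G_hist.pop(0)
            F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx                     # plain fixed-point step to bootstrap
        else:
            dF = np.column_stack(F_hist)
            dF = dF[:, 1:] - dF[:, :-1]    # consecutive residual differences
            gamma, *_ = np.linalg.lstsq(dF, F_hist[-1], rcond=None)
            dG = np.column_stack(G_hist)
            dG = dG[:, 1:] - dG[:, :-1]
            x = G_hist[-1] - dG @ gamma    # extrapolated iterate
    return x

# usage sketch: the contraction g(x) = 0.5*x + 1 has fixed point x = 2
print(anderson_acceleration(lambda x: 0.5 * x + 1.0, x0=np.zeros(3)))
```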
Gasparyan, Samvel. "Deux problèmes d’estimation statistique pour les processus stochastiques". Thesis, Le Mans, 2016. http://www.theses.fr/2016LEMA1031/document.
This work is devoted to questions in the statistics of stochastic processes. The first chapter treats a non-parametric estimation problem for an inhomogeneous Poisson process; the problem is non-parametric because we estimate the mean function. We start with the definition of asymptotic efficiency in non-parametric estimation problems and continue with an examination of the existence of asymptotically efficient estimators. We consider a class of kernel-type estimators. In the thesis we prove that, under some conditions on the coefficients of the kernel with respect to a trigonometric basis, we have asymptotic efficiency in the minimax sense over various sets. The obtained results highlight the phenomenon that by imposing regularity conditions on the unknown function, we can widen the class of asymptotically efficient estimators. To compare these (first-order) efficient estimators, we prove an inequality which allows us to find an estimator that is asymptotically efficient of second order. We also calculate the rate of convergence of this estimator, which depends on the regularity of the unknown function, and finally the minimal value of its asymptotic variance. This value plays the same role in second-order estimation as the Pinsker constant in the density estimation problem or the Fisher information in parametric estimation problems. The second chapter is dedicated to estimating the solution of a Backward Stochastic Differential Equation (BSDE). We observe a diffusion process given by its stochastic differential equation, with the diffusion coefficient depending on an unknown parameter; the observations are discrete. To estimate the solution of a BSDE, we need an estimator-process for the parameter which, at each given time, uses only the available part of the observations. The literature offers a construction method that minimizes a functional, but we could not use this estimator because the calculations would not be feasible. We propose an estimator-process which has a simple form and can be easily computed. Using this estimator we estimate the solution of a BSDE in an asymptotically efficient way.
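As a rough illustration of what a kernel-type estimator in this setting can look like, the sketch below smooths the event times of n independent realizations of an inhomogeneous Poisson process with a Gaussian kernel and integrates the result to obtain the mean function. It is a generic textbook construction under our own assumptions (kernel choice, no boundary correction), not the estimator class studied in the thesis.

```python
import numpy as np

def kernel_mean_function(event_times_per_run, grid, bw):
    """Kernel estimate of the intensity lambda(t) of an inhomogeneous Poisson
    process from n independent realizations, and of its mean function
    Lambda(t) = integral of lambda.  The n runs are superposed: the pooled
    process is Poisson with intensity n * lambda(t), hence the division by n.
    Boundary bias near the edges of the grid is ignored in this sketch."""
    n = len(event_times_per_run)
    events = np.concatenate(event_times_per_run)
    z = (grid[:, None] - events[None, :]) / bw
    lam = np.exp(-0.5 * z**2).sum(axis=1) / (n * bw * np.sqrt(2.0 * np.pi))
    Lam = np.cumsum(lam) * (grid[1] - grid[0])   # crude numerical integration
    return lam, Lam

# usage sketch: two hypothetical runs observed on [0, 10]
runs = [np.array([0.7, 2.1, 2.3, 5.9]), np.array([1.2, 4.4, 8.0])]
lam, Lam = kernel_mean_function(runs, grid=np.linspace(0.0, 10.0, 501), bw=0.5)
```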
Lacot, Émilie. "L'évaluation psychotechnique des performances mnésiques : problèmes épistémologiques et méthodologiques". Thesis, Toulouse 2, 2014. http://www.theses.fr/2014TOU20099.
The use of psycho-technical scores by neuropsychology researchers to test research hypotheses, and by neuropsychologists to perform memory assessments, lacks scientific legitimacy if these scores are not measurements. A neuropsychology researcher who wants to decide between two competing theoretical approaches by means of a case study should rather consider whether the theories predict the probability of success on test items belonging to disjoint ranges. This implies that the notion of the probability of the patient succeeding on an item is empirically grounded (absence of any learning). If learning takes place, especially in a patient with a brain injury, the scientific problem is different, since it is to discover what makes learning possible. The clinician, the practitioner of neuropsychological tests, participates in turn in a diagnostic institution that is obliged to select the patients who will benefit from further examination of their brains. In this perspective, psycho-technical scores feed a socio-technical system whose legitimacy is not expected to be merely scientific but also political. The scientism determining the current conditions of test validation not only masks the political aspect of this system, but also prevents researchers from thinking about what measuring means (scoring a performance is not equivalent to measuring a theoretical quantity). The methodology for this reflection is based on (i) a series of standard studies (validation testing, case study) and (ii) a thorough analysis of the notion of testability of research hypotheses used by neuropsychology researchers, and of the assumptions underlying the measurability of a theoretical quantity.
Chaumette, Eric. "Contribution à la caractérisation des performances des problèmes conjoints de détection et d'estimation". PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2004. http://tel.archives-ouvertes.fr/tel-00132161.
A wide variety of actual processing tasks (radar, sonar, telecoms...) requires a detection step, whose main effect is to restrict the set of observations available for unknown parameter estimation. Therefore, we address the derivation of Mean Square Error (MSE) lower bounds for deterministic parameters conditioned by a binary hypothesis testing problem, using a didactic approach of wide scope. To prove that it is meaningful, we also show, with the help of a fundamental application, that the problem of lower bound tightness at low SNR may arise from an incorrect lower bound formulation that does not take into account the true nature of the problem under investigation: a joint detection-estimation problem.
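The low-SNR tightness issue described above is easy to reproduce with a toy Monte Carlo experiment: estimate a scalar in Gaussian noise, but keep only the trials where an energy detector fires. The sketch below is our own illustration of the conditional-MSE phenomenon, not the bounds derived in the thesis; the threshold and SNR values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_mse(theta, sigma, tau, trials=200_000):
    """MSE of the naive estimator theta_hat = x, conditioned on detection
    |x| > tau, for x = theta + Gaussian noise.  Conditioning on the detector
    changes the error statistics, so the conditional MSE can deviate markedly
    from the unconditional CRB sigma**2 at low SNR."""
    x = theta + sigma * rng.standard_normal(trials)
    detected = np.abs(x) > tau
    return np.mean((x[detected] - theta) ** 2), detected.mean()

mse, p_det = conditional_mse(theta=1.0, sigma=1.0, tau=1.5)
print(f"conditional MSE = {mse:.3f} (unconditional CRB = 1.0), P(det) = {p_det:.2f}")
```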
Betencurte da Silva, Wellington. "Aplicação de filtros de partículas para a assimilação de dados em problemas de fronteira móvel". PhD thesis, Toulouse, INPT, 2012. http://oatao.univ-toulouse.fr/11752/1/betencurte.pdf.
Razafindralambo, Tahiry. "Performances des couches MAC dans les réseaux sans fil ad hoc : problèmes et solutions". PhD thesis, INSA de Lyon, 2007. http://tel.archives-ouvertes.fr/tel-00532658.
Vert, Daniel. "Étude des performances des machines à recuit quantique pour la résolution de problèmes combinatoires". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG026.
The main contribution of this thesis is to investigate the behavior of analog quantum computers, as commercialized by D-Wave, when confronted with instances of the maximum cardinality matching problem that are specifically designed to be hard to solve by means of simulated annealing. We benchmark a D-Wave "Washington" (2X) with 1098 operational qubits on various sizes of such instances and observe that, for all but the most trivially small of these, it fails to obtain an optimal solution. Our results thus suggest that quantum annealing, at least as implemented in a D-Wave device, falls into the same pitfalls as simulated annealing, and hence provide additional evidence that there exist polynomial-time problems that such a machine cannot solve efficiently to optimality. Additionally, we investigate the extent to which the qubit interconnection topology explains these experimental results.
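For intuition about the classical side of such benchmarks, here is a small simulated-annealing sketch on the usual QUBO penalty encoding of maximum cardinality matching. It is a hedged illustration only: the encoding, penalty weight, and geometric cooling schedule are our assumptions, and none of this reproduces the D-Wave embedding or the hard instance family used in the thesis.

```python
import itertools
import math
import random

def matching_energy(x, edges, penalty=2.0):
    """QUBO-style energy: -(number of selected edges) plus a penalty for each
    pair of selected edges that share a vertex (i.e., violate matching)."""
    e = -sum(x)
    for (i, (u, v)), (j, (s, t)) in itertools.combinations(enumerate(edges), 2):
        if x[i] and x[j] and {u, v} & {s, t}:
            e += penalty
    return e

def simulated_annealing(edges, sweeps=2000, t0=2.0, t1=0.01, seed=0):
    rng = random.Random(seed)
    x = [0] * len(edges)
    e = matching_energy(x, edges)
    for k in range(sweeps):
        t = t0 * (t1 / t0) ** (k / sweeps)   # geometric cooling schedule
        i = rng.randrange(len(edges))
        x[i] ^= 1                            # propose a single bit flip
        e_new = matching_energy(x, edges)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                        # accept (Metropolis rule)
        else:
            x[i] ^= 1                        # reject: undo the flip
    return x, -e                             # -e is the matching size when x is feasible

# usage sketch: a 4-cycle, whose maximum matching has cardinality 2
x, size = simulated_annealing([(0, 1), (1, 2), (2, 3), (3, 0)])
print(x, size)
```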
Norre, Sylvie. "Problèmes de placement de tâches sur des architectures multiprocesseurs : méthodes stochastiques et évaluation des performances". Clermont-Ferrand 2, 1993. http://www.theses.fr/1993CLF21511.
Touati, Nora. "Amélioration des performances du schéma de la génération de colonnes : application aux problèmes de tournées de véhicules". Paris 13, 2008. http://www.theses.fr/2008PA132032.
Column generation algorithms are instrumental in many areas of applied optimization where linear programs with an enormous number of variables need to be solved. Although successfully used in many applications, this method suffers from well-known "instability" issues that somewhat limit its efficiency. This work focuses on accelerating strategies in a column generation algorithm. We propose diversification methods that decrease the total number of generated columns and hence the resolution time of the master problems. We are also interested in solving the pricing problems efficiently, proposing an improvement approach based on the reoptimization principle and a new variant of the dynamic programming algorithm. The effectiveness of these approaches is validated on the vehicle routing problem with time windows.
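The column generation scheme itself is easiest to see on the classic Gilmore-Gomory cutting-stock model. The sketch below alternates a restricted master LP (solved with scipy's HiGHS backend) with a dynamic-programming knapsack pricing step; it is a generic illustration of the scheme under our own assumptions, not the vehicle-routing implementation or the stabilization techniques developed in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def knapsack_pricing(duals, sizes, capacity):
    """Unbounded knapsack by dynamic programming: the cutting pattern that
    maximizes total dual value.  A new column improves the master iff this
    value exceeds 1 (its reduced cost 1 - value is then negative)."""
    best = np.zeros(capacity + 1)
    choice = [[] for _ in range(capacity + 1)]
    for c in range(1, capacity + 1):
        best[c], choice[c] = best[c - 1], choice[c - 1]
        for i, s in enumerate(sizes):
            if s <= c and best[c - s] + duals[i] > best[c]:
                best[c], choice[c] = best[c - s] + duals[i], choice[c - s] + [i]
    pattern = np.zeros(len(sizes))
    for i in choice[capacity]:
        pattern[i] += 1
    return pattern, best[capacity]

def column_generation(sizes, demands, capacity, max_iters=50):
    """Minimize the number of stock rolls: min 1'x  s.t.  A x >= demands."""
    m = len(sizes)
    cols = [np.eye(m)[i] * (capacity // sizes[i]) for i in range(m)]  # trivial start
    for _ in range(max_iters):
        A = np.column_stack(cols)
        res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-np.asarray(demands, float),
                      bounds=(0, None), method="highs")
        duals = -res.ineqlin.marginals        # prices of the demand constraints
        pattern, value = knapsack_pricing(duals, sizes, capacity)
        if value <= 1 + 1e-9:                 # no improving column: LP optimum
            break
        cols.append(pattern)
    return res.fun, cols

# usage sketch with made-up data: 3 item sizes, roll capacity 16
obj, cols = column_generation(sizes=[3, 5, 7], demands=[25, 20, 18], capacity=16)
print(f"LP optimum: {obj:.2f} rolls with {len(cols)} generated patterns")
```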
Arres, Billel. "Optimisation des performances dans les entrepôts distribués avec MapReduce : traitement des problèmes de partitionnement et de distribution des données". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE2012.
In this manuscript, we address the problems of data partitioning and distribution for large-scale data warehouses distributed with MapReduce. First, we address the problem of data distribution. Here we propose a strategy to optimize data placement on distributed systems based on the collocation principle. The objective is to optimize query performance through the definition of an intentional data distribution schema that reduces the amount of data transferred between nodes during treatments, specifically during MapReduce's shuffling phase. Second, we propose a new approach to improve data partitioning and placement in distributed file systems, especially Hadoop-based systems, Hadoop being the standard implementation of the MapReduce paradigm. The aim is to overcome the default data partitioning and placement policies, which do not take any relational data characteristics into account. Our proposal proceeds in two steps: based on the query workload, it defines an efficient partitioning schema; the system then defines a data distribution schema that best meets the user's needs by collocating data blocks on the same or the closest nodes. The objective here is to optimize query execution and parallel processing performance by improving data access. Our third proposal addresses the problem of workload dynamicity, since users' analytical needs evolve over time. In this case, we propose the use of multi-agent systems (MAS) as an extension of our data partitioning and placement approach. Through the autonomy and self-control that characterize MAS, we developed a platform that automatically defines new distribution schemas as new queries are added to the system, and applies data rebalancing according to the new schema. This relieves the system administrator of the burden of managing load balancing, while improving query performance through careful data partitioning and placement policies. Finally, to validate our contributions, we conducted a set of experiments to evaluate the different approaches proposed in this manuscript. We study the impact of intentional data partitioning and distribution on the data warehouse loading phase, the execution of analytical queries, OLAP cube construction, and load balancing. We also define a cost model that allowed us to evaluate and validate the partitioning strategy proposed in this work.
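A toy version of the collocation idea referred to above: hash rows of several relations on their shared join key, so that blocks with matching keys land on the same node and joins on that key need no shuffle. The sketch is a deliberately simplified, hypothetical illustration; it is not the Hadoop block-placement policy developed in the thesis.

```python
import hashlib

def node_for_key(key, nodes):
    """Deterministic placement: any row, from any table, that carries the same
    join-key value hashes to the same node."""
    h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

def collocate(tables, join_key, nodes):
    """Co-partition several relations on a common join key."""
    placement = {node: [] for node in nodes}
    for name, rows in tables.items():
        for row in rows:
            placement[node_for_key(row[join_key], nodes)].append((name, row))
    return placement

# usage sketch: a tiny fact table and dimension table co-partitioned on "cust_id"
tables = {
    "sales": [{"cust_id": 1, "amount": 10}, {"cust_id": 2, "amount": 7}],
    "customers": [{"cust_id": 1, "name": "a"}, {"cust_id": 2, "name": "b"}],
}
print(collocate(tables, "cust_id", nodes=["node0", "node1", "node2"]))
```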
Guivarch, Ronan. "Résolution parallèle de problèmes aux limites couplés par des méthodes de sous-domaines synchrones et asynchrones". Toulouse, INPT, 1997. http://www.theses.fr/1997INPT044H.
Merrad, Walid. "Interfaces tangibles et réalité duale pour la résolution collaborative de problèmes autour de tables interactives distribuées". Thesis, Valenciennes, Université Polytechnique Hauts-de-France, 2020. http://www.theses.fr/2020UPHF0010.
In everyday life, new interactions are gradually replacing the standard computer keyboard and mouse, using human body gestures (hands, fingers, head, etc.) as alternative means of interaction on surfaces and in the air. Another type of interaction resides in the manipulation of everyday objects to interact with digital systems. Interactive tabletops have emerged as new platforms in several domains, offering better usability and facilitating multi-user collaboration thanks to their large display surface and the different interaction techniques they support, such as multi-touch and tangible interaction. Improving interactions on these devices and combining them with other concepts can therefore prove useful in the everyday life of users and designers. This thesis focuses on studying user interactions on tangible interactive tabletops in a context of use set in a dual reality environment. Tangible user interfaces offer users the possibility to apprehend and grasp the meaning of digital information by manipulating insightful tangible representations in our physical world. These interaction metaphors bridge the two environments that constitute the dual reality: the physical world and the virtual world. In this perspective, this work presents a theoretical contribution along with its applications. We propose to combine tangible interaction on tabletops and dual reality in a conceptual framework, intended primarily for application designers, that models and explains the interactions and representations operating in dual reality setups. First, we survey work carried out in the field of tangible interaction in general, then focus on existing work on tabletops; we also catalogue 112 interactive tabletops, classified and characterized by several criteria. Next, we present the dual reality concept and its possible application domains. We then develop our proposed framework, illustrating and explaining its component elements and how it can adapt to various dual reality situations, particularly with interactive tabletops equipped with RFID technology. Finally, as application contributions, we present case studies designed on the basis of our proposal, which illustrate implementations of elements from the framework. Research perspectives are highlighted at the end of the manuscript.
Simon-Fiacre, Caroline. "Médiations sémiotiques en situation de co-résolution de problèmes : effets sur les stratégies et performances d'enfants de CP et de CE2 dans les tâches d'exploration ou d'encodage d'un parcours". Aix-Marseille 1, 2005. http://www.theses.fr/2005AIX10019.
Brogna, Gianluigi. "Probabilistic Bayesian approaches to model the global vibro-acoustic performance of vehicles". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI082.
In the automotive domain, the current approaches to predicting and analysing the vibro-acoustic behaviour of a vehicle, although already quite elaborate, are still far from the complexity of the real system. Among other limitations, design specifications are still essentially based on extreme loading conditions, which are useful when verifying mechanical strength but not representative of actual vehicle usage, and the latter is what matters when addressing vibro-acoustic performance. As a consequence, one main aim here is to build a prediction model able to take into account the loading scenarios representative of actual vehicle usage, as well as the car's structural uncertainty (due, for instance, to production dispersion). The proposed model covers the low- and mid-frequency domain. To this end, four main steps are proposed: (1) the definition of a model for a general vehicle system, pertinent to the vibro-acoustic responses of interest; (2) the estimation of the whole set of loads applied to this system over a large range of operating conditions; (3) the statistical analysis and modelling of these loads as a function of the vehicle operating conditions; (4) the analysis of the application of the modelled loads to non-parametric stochastic transfer functions representative of the vehicle's structural uncertainty. To carry out these steps, ad hoc Bayesian algorithms were developed and applied to a large industrial database. The Bayesian framework is considered particularly valuable here since it allows prior knowledge, namely from automotive experts, to be taken into account, and since it easily enables uncertainty propagation between the layers of the probabilistic model. Finally, this work shows that the proposed algorithms, beyond simply yielding a model of the vibro-acoustic response of a vehicle, are also useful for gaining deep insight into the dominant physical mechanisms at the origin of the response of interest.
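As a flavor of the kind of per-layer Bayesian update such a model can chain together, here is a minimal conjugate Gaussian update for a single load parameter: an expert prior combined with a handful of measured operating values. The numbers and names are hypothetical; the algorithms in the thesis are substantially richer.

```python
import numpy as np

def gaussian_posterior(prior_mean, prior_var, obs, noise_var):
    """Conjugate normal update: the posterior precision is the sum of the
    prior precision and the data precision; the posterior can then be
    propagated as the prior of the next layer of a hierarchical model."""
    n = len(obs)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(obs) / noise_var)
    return post_mean, post_var

# hypothetical numbers: expert prior on a load level, three measured values
mean, var = gaussian_posterior(prior_mean=50.0, prior_var=100.0,
                               obs=np.array([62.0, 58.5, 60.2]), noise_var=9.0)
print(f"posterior load: {mean:.1f} +/- {var ** 0.5:.1f}")
```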
Alsayasneh, Maha. "On the identification of performance bottlenecks in multi-tier distributed systems". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM009.
Today's distributed systems are made of various software components with complex interactions and a large number of configuration settings. Pinpointing the performance bottlenecks is generally a non-trivial task, which requires human expertise as well as trial and error. Moreover, the same software stack may exhibit very different bottlenecks depending on factors such as the underlying hardware, the application logic, the configuration settings, and the operating conditions. This work aims to (i) investigate whether it is possible to identify a set of key metrics that can be used as reliable and general indicators of performance bottlenecks, (ii) identify the characteristics of these indicators, and (iii) build a tool that can automatically and accurately determine whether the system has reached its maximum capacity in terms of throughput. In this thesis, we present three contributions. First, we present an analytical study of a large number of realistic configuration setups of multi-tier distributed applications, focusing specifically on data processing pipelines. By analyzing a large number of metrics at the hardware and software levels, we identify those that exhibit a change in behavior at the point where the system reaches its maximum capacity; we consider these metrics reliable indicators of performance bottlenecks. Second, we leverage machine learning techniques to build a tool that can automatically identify performance bottlenecks in the data processing pipeline, considering different machine learning methods, different selections of metrics, and different cases of generalization to new setups. Third, to assess the validity of the results obtained on the data processing pipeline for both the analytical and the learning-based approaches, the two approaches are applied to the case of a Web stack. From our research, we draw several conclusions. First, it is possible to identify key metrics that act as reliable indicators of performance bottlenecks for a multi-tier distributed system; more precisely, the point at which a server reaches its maximum capacity can be identified based on these metrics. Contrary to the approach adopted by many existing works, our results show that a combination of metrics of different types is required to ensure reliable identification of performance bottlenecks across a large number of setups. We also show that approaches based on machine learning techniques for analyzing metrics can identify performance bottlenecks in a multi-tier distributed system; comparing different models shows that those based on the reliable metrics identified by our analytical study achieve the best accuracy. Furthermore, our extensive analysis shows the robustness of the obtained models, which generalize to new setups, to new numbers of clients, and to both at once. Extending the analysis to a Web stack confirms the main findings obtained through the study of the data processing pipeline. These results pave the way towards a general and accurate tool to identify performance bottlenecks in distributed systems.
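A minimal sketch of the learning-based side of this approach: train a classifier on per-window system metrics labeled "saturated or not", then read the feature importances as hints about which metrics behave as bottleneck indicators. The data here is synthetic and the metric names are invented; the thesis's actual metric set, labels, and models are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
names = ["cpu_util", "disk_await", "net_out", "queue_len", "gc_pause"]

# synthetic stand-in for real telemetry: one row per observation window,
# with saturation driven by disk latency and queue length jointly
X = rng.normal(size=(5000, len(names)))
y = ((X[:, 1] > 0.5) & (X[:, 3] > 0.3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
for name, w in sorted(zip(names, clf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:>10}: {w:.2f}")   # indicator metrics should rank highest
```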
Bacharach, Lucien. "Caractérisation des limites fondamentales de l'erreur quadratique moyenne pour l'estimation de signaux comportant des points de rupture". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS322/document.
This thesis deals with the study of estimator performance in signal processing, focusing on the analysis of lower bounds on the Mean Square Error (MSE) for abrupt change-point estimation. Such tools help characterize the performance of the maximum likelihood estimator in the frequentist context, and of the maximum a posteriori and conditional mean estimators in the Bayesian context. The main difficulty comes from the fact that, when dealing with sampled signals, the parameters of interest (i.e., the change points) lie in a discrete space. Consequently, the classical large-sample theory results (e.g., asymptotic normality of the maximum likelihood estimator) and the Cramér-Rao bound do not apply. Some results concerning the asymptotic distribution of the maximum likelihood estimator are available in the mathematics literature, but they are currently of limited interest for practical signal processing problems. When the MSE of estimators is chosen as the performance criterion, a substantial amount of work on lower bounds has appeared in recent years: several studies have proposed new inequalities leading to lower bounds that are tighter than the Cramér-Rao bound, require fewer regularity conditions, and are able to handle the MSE behavior of estimators in both the asymptotic and non-asymptotic regimes. The goal of this thesis is to complete previous results on lower bounds in the asymptotic regime (i.e., when the number of samples and/or the signal-to-noise ratio is high) for change-point estimation, and also to provide an analysis in the non-asymptotic regime. The tools used here are the lower bounds of the Weiss-Weinstein family, which are already known in signal processing to outperform the Cramér-Rao bound for applications such as spectral analysis or array processing. A closed-form expression of this family is provided for single and multiple change points, and some extensions are given when the parameters of the distributions on each segment are unknown. An analysis of robustness with respect to the influence of the prior on our models is also provided. Finally, we apply our results to specific problems involving Gaussian data, Poisson data, and exponentially distributed data.
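For reference, the single-test-point member of the Weiss-Weinstein family has the following standard form (our transcription from the classical literature, with the expectation taken over the joint density p(x, θ); in the change-point setting the test point h runs over the discrete grid of sample indices):

```latex
\mathrm{MSE} \;\ge\; \sup_{h,\, s \in (0,1)}
\frac{h^{2}\,\bigl(\mathbb{E}\!\left[L^{s}(x;\theta+h,\theta)\right]\bigr)^{2}}
     {\mathbb{E}\!\left[\bigl(L^{s}(x;\theta+h,\theta)-L^{1-s}(x;\theta-h,\theta)\bigr)^{2}\right]},
\qquad
L(x;\theta_{1},\theta_{2}) = \frac{p(x,\theta_{1})}{p(x,\theta_{2})}.
```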
Brossier, Romain. "Imagerie sismique à deux dimensions des milieux visco-élastiques par inversion des formes d'ondes : développements méthodologiques et applications". PhD thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00451138.
Simard, Marie-Noelle. "Identification précoce des nouveau-nés qui auront des problèmes de développement à deux ans: utilité de l'Évaluation neurologique d'Amiel-Tison". Thèse, 2009. http://hdl.handle.net/1866/3837.
Despite the progress of medicine, early prediction of neurodevelopmental outcome for infants born preterm still remains a challenge. Infants born preterm are at risk of mild to severe disabilities such as cerebral palsy, mental retardation, sensory impairments, or learning disabilities. To reduce the functional impact of those disabilities, identifying valid early markers of neurodevelopmental disability becomes important. As financial and human resources are limited, only infants with a gestational age (GA) <29 weeks and a birth weight (BW) <1250 g are systematically followed, leaving 95% of the preterm population without surveillance. The identification of early markers would allow targeting the infants born after 28 weeks of GA who are most at risk. The main objective of the present work was to assess the use of the Amiel-Tison Neurological Assessment (ATNA) in the identification and follow-up of infants with a GA between 29 and 37 weeks who will present neurodevelopmental problems at two years of corrected age (CA). Specifically, the inter-examiner reliability, the stability during the first two years of life, and the predictive validity of the ATNA were assessed. The cohort was composed of 173 children born between 29 0/7 and 37 0/7 weeks of GA, with a BW <1250 g, who stayed at least 24 hours in the neonatal intensive care unit at the CHU Sainte-Justine. The children were assessed with the ATNA at term age and at 4, 8, 12, and 24 months CA. At 24 months CA, their development was assessed with the Bayley Scales of Infant Development-II. The results revealed excellent inter-examiner reliability and good stability of the neurological status and signs during the first two years of life. Significant differences in developmental performance at 24 months CA were observed according to the neurological status at term age; moreover, this status was one of the main predictors of developmental performance at two years CA. These results support integrating the neurological status, as assessed with the ATNA, as an early marker for the surveillance of the infants most at risk.
Bonneville-Hébert, Ariane. "Analyse de la fertilité des vaches laitières Holstein «Repeat Breeder»". Thèse, 2009. http://hdl.handle.net/1866/3956.
Two factors underlie the Repeat Breeder (RB) concern in Quebec: its incidence and its economic impact. Currently, the RB incidence in Quebec is around 25% (yearly report, June 2008, www.dsahr.ca). Monetary losses related to RB result from veterinary and insemination expenses, loss of productivity, and involuntary culling. In order to gain a better knowledge of this syndrome, one must understand the general risk factors involved and then explore the individual condition of these problem cows. The goal of the first part of the project was to assess the impact of postpartum reproductive problems and the effect of lactation number as risk factors for the Repeat Breeder cow. A computerized data bank listing 418,383 lactations was analyzed. The analysis established dystocia as the condition with the greatest consequences for future fertility; other risk factors, namely the number of lactations, influence the reproductive prognosis as well. The second part of the research explored the individual condition of the RB cow using clinical tools. A cohort study was conducted on Holstein cows at the end of the voluntary waiting period, on day 7 of the oestrous cycle. The clinical tests studied were vaginoscopy, trans-rectal examination, ultrasonography of the reproductive system, presence of leukocyte esterase, bacteriology and biochemistry of uterine fluid, endometrial cytology, and the serum progesterone profile. The results of these clinical tests reveal that the bacteriological analysis of uterine fluid is indicative of future reproductive status.