Dissertations / Theses on the topic 'K-statistiques'
Consult the top 25 dissertations / theses for your research on the topic 'K-statistiques.'
Imlahi, Abdelouahid. "Approximations fortes des valeurs et temps de records bâtis sur les k-ièmes statistiques d'ordre extrêmes." Paris 6, 1990. http://www.theses.fr/1990PA066175.
Debreuve, Eric. "Mesures de similarité statistiques et estimateurs par k plus proches voisins : une association pour gérer des descripteurs de haute dimension en traitement d'images et de vidéos." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00457710.
Fournier, Alexandre. "Détection et classification de changements sur des zones urbaines en télédétection." Toulouse, ISAE, 2008. https://tel.archives-ouvertes.fr/tel-00463593.
Ould, Aboubecrine Mohamed Mahmoud. "Sur l'estimation basée sur les records et la caractérisation des populations." Le Havre, 2011. http://www.theses.fr/2011LEHA0004.
In the first part of this work, we consider a number of k-record values from independent and identically distributed random variables with a continuous distribution function F. Our aim is to predict future k-record values under suitable assumptions on the tail of F. In the second part, we consider finite populations and investigate their characterization by regressions of order statistics under sampling without replacement. We also give some asymptotic results as the size of the population goes to infinity.
Apostol, Costin. "Apports bioinformatiques et statistiques à l'identification d'inhibiteurs du récepteur MET." Thesis, Lille 2, 2010. http://www.theses.fr/2010LIL2S053.
The effect of polysaccharides on the HGF-MET interaction was studied using an experimental design with several microarrays under different experimental conditions. The purpose of the analysis is the selection of the best polysaccharides, inhibitors of the HGF-MET interaction. From a statistical point of view this is a classification problem. Statistical and computer processing of the resulting microarrays required implementing the PASE platform with statistical analysis plug-ins for this type of data. The main feature of these data is the repeated measurements: the experiment was repeated on 5 microarrays and each studied polysaccharide is replicated 3 times on each microarray. We are no longer in the classical case of globally independent data; independence holds only at the inter-subject and intra-subject levels. We propose mixed models for data normalization and represent subjects by their empirical cumulative distribution functions. The use of the Kolmogorov-Smirnov statistic is natural in this context, and we study its behaviour in classification algorithms such as hierarchical clustering and k-means. The choice of the number of clusters and of the number of repetitions needed for a robust classification is discussed in detail. The robustness of this methodology is measured by simulations and applied to the HGF-MET data. The results helped the biologists and chemists of the Institute of Biology of Lille choose the best polysaccharides in the tests they conducted. Some of these results also confirmed the researchers' intuition. The R scripts implementing this methodology are integrated into the PASE platform. The use of functional data analysis on such data is part of the immediate future work.
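The Kolmogorov-Smirnov distance between subjects' empirical CDFs, which this abstract feeds into clustering algorithms, can be sketched in a few lines of plain Python. This is a generic illustration; the function and variable names are ours, not taken from the PASE platform:

```python
import bisect

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: sup_t |F_x(t) - F_y(t)|,
    evaluated over the pooled sample points."""
    xs, ys = sorted(xs), sorted(ys)

    def ecdf(sorted_sample, t):
        # Fraction of sample points <= t (right-continuous empirical CDF).
        return bisect.bisect_right(sorted_sample, t) / len(sorted_sample)

    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in xs + ys)

# Two "subjects", each represented by its repeated measurements:
subject_a = [0.10, 0.20, 0.30]
subject_b = [10.0, 11.0, 12.0]
d_ab = ks_statistic(subject_a, subject_b)   # disjoint supports: distance 1
d_aa = ks_statistic(subject_a, subject_a)   # identical samples: distance 0
```

A matrix of such pairwise distances can then be handed to hierarchical clustering directly; note that plain k-means needs a vector-space mean, so a distance-only setting typically calls for a k-medoids-style variant instead.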
Lucet, Corinne. "Méthode de décomposition pour l'évaluation de la fiabilité des réseaux." Compiègne, 1993. http://www.theses.fr/1993COMPD653.
Garcia, Vincent. "Suivi d'objets d'intérêt dans une séquence d'images : des points saillants aux mesures statistiques." Phd thesis, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00374657.
The first method relies on the analysis of the temporal trajectories of salient points and performs tracking of regions of interest. Salient points (typically locations of high curvature of iso-intensity lines) are detected in every image of the sequence. Trajectories are built by linking points in successive images whose neighbourhoods are consistent. Our first contribution is the analysis of trajectories over a group of images, which improves the quality of motion estimation. In addition, we use a spatio-temporal weighting for each trajectory, which adds a temporal constraint on the motion while accounting for the local geometric deformations of the object that a global motion model ignores.
The second method performs a spatio-temporal segmentation. It relies on estimating the motion of the object's contour from the information contained in a ring extending on both sides of this contour. This ring captures the contrast between the background and the object in a local context; this is our first contribution here. Moreover, matching a portion of the ring to a region of the next image in the sequence through a statistical similarity measure, namely the entropy of the residual, improves the tracking while easing the choice of the optimal ring size.
Finally, we propose a fast implementation of an existing region-of-interest tracking method. This method relies on a statistical similarity measure, the Kullback-Leibler divergence, which can be estimated in a high-dimensional space through multiple computations of distances to the k-th nearest neighbour in that space. As these computations are very costly, we propose a parallel GPU implementation (using NVIDIA's CUDA programming interface) of the exhaustive k-nearest-neighbour search. We show that this implementation speeds up object tracking by a factor of up to 15 compared with an implementation of the search that requires a prior structuring of the data.
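The k-nearest-neighbour estimation of the Kullback-Leibler divergence referred to in this abstract can be sketched as follows. This is a brute-force CPU sketch of the standard k-NN estimator (the constants follow the Wang-Kulkarni-Verdú form), not the GPU/CUDA implementation of the thesis:

```python
import math

def knn_dist(point, data, k, exclude_self=False):
    """Distance from `point` to its k-th nearest neighbour in `data`
    (exhaustive search; this is the step a CUDA kernel would parallelize)."""
    dists = sorted(math.dist(point, q) for q in data)
    if exclude_self:
        dists = dists[1:]        # drop the zero distance to the point itself
    return dists[k - 1]

def kl_knn_estimate(xs, ys, k=1):
    """k-NN Kullback-Leibler divergence estimate D(P || Q),
    from samples xs ~ P and ys ~ Q of dimension d."""
    n, m, d = len(xs), len(ys), len(xs[0])
    total = sum(
        math.log(knn_dist(x, ys, k) / knn_dist(x, xs, k, exclude_self=True))
        for x in xs)
    return (d / n) * total + math.log(m / (n - 1))

xs = [(0.0,), (1.0,), (2.0,)]
ys = [(0.1,), (1.1,), (2.1,)]
est = kl_knn_estimate(xs, ys, k=1)
```

The O(n·m) distance computation above is exactly what dominates the cost in high dimension, which is why an exhaustive but massively parallel GPU search pays off.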
Levrard, Clément. "Quantification vectorielle en grande dimension : vitesses de convergence et sélection de variables." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112214/document.
This thesis first studies the distortion of the quantizer built with the well-known k-means algorithm from an n-sample of a probability distribution over a vector space. To be more precise, this report aims to give oracle inequalities on the difference between the distortion of the k-means quantizer and the minimum distortion achievable by a k-point quantizer, in which the influence of the natural parameters of the quantization issue is precisely described. For instance, some natural parameters are the distribution support, the size k of the quantizer's set of image points, the dimension of the underlying Euclidean space, and the sample size n. After a brief summary of the previous works on this topic, an equivalence is stated, in the continuous density case, between the conditions previously required for the excess distortion to decrease fast with respect to the sample size and a technical condition. Interestingly, this condition resembles a technical condition required in statistical learning to achieve fast rates of convergence. Then, it is proved that the excess distortion achieves a fast convergence rate of 1/n in expectation, provided that this technical condition is satisfied. Next, a so-called margin condition is introduced, which is easier to understand, and it is established that this margin condition implies the technical condition mentioned above. Some examples of distributions satisfying this margin condition are given, such as Gaussian mixtures, which are classical distributions in the clustering framework. Then, provided that this margin condition is satisfied, an oracle inequality on the excess distortion of the k-means quantizer is given. This convergence result shows that the excess distortion decreases at a rate 1/n and depends on natural geometric properties of the probability distribution with respect to the size k of the set of image points.
Surprisingly, the dimension of the underlying Euclidean space seems to play no role in the convergence rate of the distortion. Following this point, the results are directly extended to the case where the underlying space is a Hilbert space, which is the adapted framework when dealing with curve quantization. However, high-dimensional quantization often requires in practice a dimension-reduction step before proceeding to a quantization algorithm. This motivates the following study of a variable selection procedure adapted to the quantization issue. To be more precise, a Lasso-type procedure adapted to the quantization framework is studied. The Lasso-type penalty applies to the set of image points of the quantizer, in order to obtain sparse image points. The outcome of this procedure is called the Lasso k-means quantizer, and some theoretical results on this quantizer are established under the margin condition introduced above. First it is proved that the image points of such a quantizer are close to the image points of a sparse quantizer, achieving a kind of tradeoff between excess distortion and the size of the support of the image points. Then an oracle inequality on the excess distortion of the Lasso k-means quantizer is given, providing a convergence rate of 1/n^(1/2) in expectation. Moreover, the dependency of this convergence rate on several other parameters is precisely described. These theoretical predictions are illustrated with numerical experiments, showing that the Lasso k-means procedure mainly behaves as expected. However, the numerical experiments also shed light on some drawbacks concerning the practical implementation of such an algorithm.
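As a concrete anchor for the notions of distortion and k-means quantizer discussed in this abstract, here is a minimal 1-D sketch: a plain Lloyd iteration with a deterministic initialization, not the Lasso k-means procedure of the thesis.

```python
def distortion(points, centers):
    """Empirical distortion: mean squared distance to the nearest center."""
    return sum(min((p - c) ** 2 for c in centers) for p in points) / len(points)

def lloyd_1d(points, k, iters=20):
    """Plain 1-D Lloyd (k-means) iterations, k >= 2, deterministic init."""
    pts = sorted(points)
    centers = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:
            # Assign each point to its nearest current center.
            clusters[min(range(k), key=lambda j: (p - centers[j]) ** 2)].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

sample = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
c2 = lloyd_1d(sample, k=2)          # converges to roughly [0.1, 10.1]
excess = distortion(sample, c2)     # small: the two groups are well separated
```

The oracle inequalities of the thesis bound how far `excess` can sit above the best distortion achievable by any k-point quantizer of the underlying distribution.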
Makdeche, Saïd. "Grandes déviations de la k (n)ième statistique d'ordre cas de la loi uniforme /." Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb376155454.
Montagu, Thierry. "Transformées stabilisatrices de variance pour l'estimation de l'intensité du Shot Noise : application à l'estimation du flux neutronique." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ5015.
Shot noise is a random process that can accurately model the number of occurrences of physical particles impinging on their associated detectors; this number is referred to as the intensity of the process. When this number is small, it is possible to individualize the recorded events, whose arrival times are modelled by a Poisson process. In the opposite case, the events are no longer discernible (they "pile up"), but Campbell's theorem, which establishes the cumulants of the shot noise, still allows the intensity of the process to be estimated. The estimation of the first two cumulants is classically achieved with the empirical mean and the empirical variance. It is noted in this work that the variances of these two estimators, and of the corresponding estimators of the shot noise intensity, are functions of their respective means. This heteroscedasticity property being observed both in theory and in practice, an approach by variance-stabilizing transforms is proposed using the Delta method. These transforms are calculated, as well as their biases and their corresponding inverse transforms, and their asymptotic properties are verified through numerical simulations. In the applicative context of neutron flux measurements, which rely on the estimation of the first two cumulants of the shot noise and likewise aim at estimating the intensity of this random process, variance-stabilizing transforms are specifically established, as well as their biases and inverse transforms. They are finally combined with an adaptive Kalman filter in order to denoise the neutron flux measurements. Numerical simulations are carried out to assess filtering performance. Denoising of real signals is also performed.
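The Delta-method idea behind variance-stabilizing transforms can be illustrated on the simplest case, Poisson counts: since Var(X) ≈ E[X], the transform h(x) = 2√x makes the variance approximately constant (≈ 1) regardless of the mean. The transforms derived in the thesis for shot noise cumulant estimators are analogous but model-specific; this sketch only shows the principle:

```python
import math
import random

def poisson_vst(x):
    """Delta-method variance-stabilizing transform for Poisson-like data:
    with Var(X) ~ E[X], h(x) = 2*sqrt(x) gives Var(h(X)) ~ 1."""
    return 2.0 * math.sqrt(x)

def sample_var(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

rng = random.Random(42)

def poisson_draw(lam):
    # Knuth's multiplication method; fine for moderate lambda.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Raw variances grow with the mean; transformed variances are ~1 for both.
raw_small = [poisson_draw(5.0) for _ in range(4000)]
raw_big   = [poisson_draw(50.0) for _ in range(4000)]
v_small = sample_var([poisson_vst(x) for x in raw_small])
v_big   = sample_var([poisson_vst(x) for x in raw_big])
```

The generic recipe is the same as in the thesis: if Var(X) ≈ g(E[X]), choose h with h'(x) = 1/√(g(x)), so that the transformed data have an approximately constant variance before filtering.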
Giguelay, Jade. "Estimation des moindres carrés d'une densité discrète sous contrainte de k-monotonie et bornes de risque. Application à l'estimation du nombre d'espèces dans une population." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS248/document.
This thesis belongs to the field of nonparametric density estimation under shape constraint. The densities are discrete and the constraint is k-monotonicity, k>1, which is a generalization of convexity. The integer k is an indicator of the degree of hollowness of a convex function. This thesis is composed of three parts, an introduction, a conclusion and an appendix. Introduction: The introduction is structured in three chapters. The first chapter is a state of the art of density estimation under shape constraint. The second chapter is a synthesis of the thesis, available in French and in English. Finally, Chapter 3 is a short chapter which summarizes the notations and the classical mathematical results used in the manuscript. Part I: Estimation of a discrete distribution under a k-monotonicity constraint. Two least-squares estimators of a discrete distribution p* under the constraint of k-monotonicity are proposed. Their characterization is based on the decomposition of k-monotone sequences on a spline basis, and on the properties of their primitives. Their statistical properties are studied, and in particular their quality of estimation is measured in terms of the quadratic error. They are proved to converge at the parametric rate. An algorithm derived from the support reduction algorithm is implemented in the R package pkmon. A simulation study illustrates the properties of the estimators. This piece of work, which constitutes Part I of the manuscript, has been published in the Electronic Journal of Statistics (Giguelay, 2017). Part II: Calculation of risk bounds. In the first chapter of Part II, a methodology for calculating risk bounds of the least-squares estimator is given. These bounds are adaptive in that they depend on a compromise between the distance of p* to the frontier of the set of k-monotone densities with finite support, and the complexity (linked to the spline decomposition) of the densities in this set that are close to p*.
The methodology based on the variational formula of the risk proposed by Chatterjee (2014) is generalized to the framework of discrete k-monotone densities. Then the bracketing entropies of the relevant functional spaces are calculated, leading to a control of the empirical process involved in the quadratic risk. Optimality of the risk bound is discussed in comparison with the results previously obtained in the continuous case and in the Gaussian regression framework. In the second chapter of Part II, several results concerning bracketing entropies of spaces of k-monotone sequences are presented. Part III: Estimating the number of species in a population and tests of k-monotonicity. The last part deals with the problem of estimating the number of species present in a given area at a given time, based on the abundances of the species that have been observed. A definition of a k-monotone abundance distribution is proposed. It allows the probability of observing zero species to be related to the truncated abundance distribution. Two approaches are proposed: the first is based on the least-squares estimator under the constraint of k-monotonicity, the second on the empirical distribution. Both estimators are compared in a simulation study. Because the estimator of the number of species depends on the value of the degree of monotonicity k, we propose a procedure for choosing this parameter, based on nested testing procedures. The asymptotic level and power of the testing procedure are calculated, and the behaviour of the method in practical cases is assessed through a simulation study.
Belhadj, Zied. "Apport de la polarisation multifréquence pour la classification en télédétection radar." Nantes, 1995. http://www.theses.fr/1995NANT2056.
Ahmed, Mohamed Salem. "Contribution à la statistique spatiale et l'analyse de données fonctionnelles." Thesis, Lille 3, 2017. http://www.theses.fr/2017LIL30047/document.
This thesis is about statistical inference for spatial and/or functional data. Indeed, we are interested in the estimation of unknown parameters of some models from random or nonrandom (stratified) samples composed of independent or spatially dependent variables. The specificity of the proposed methods lies in the fact that they take into consideration the nature of the sample at hand (stratified or spatial). We begin by studying data valued in a space of infinite dimension, so-called "functional data". First, we study a functional binary choice model explored in a case-control or choice-based sample design context. The specificity of this study is that the proposed method takes into account the sampling scheme. We describe a conditional likelihood function under the sampling distribution and a dimension-reduction strategy to define a feasible conditional maximum likelihood estimator of the model. Asymptotic properties of the proposed estimators as well as their application to simulated and real data are given. Secondly, we explore a functional linear autoregressive spatial model whose particularity lies in the functional nature of the explanatory variable and the structure of the spatial dependence. The estimation procedure consists in reducing the infinite dimension of the functional variable and maximizing a quasi-likelihood function. We establish the consistency and asymptotic normality of the estimator, and the usefulness of the methodology is illustrated via simulations and an application to real data. In the second part of the thesis, we address some estimation and prediction problems for real random spatial variables. We start by generalizing the k-nearest neighbours method, namely k-NN, to predict a spatial process at non-observed locations using some covariates. The specificity of the proposed k-NN predictor lies in the fact that it is flexible and allows a degree of heterogeneity in the covariate.
We establish the almost complete convergence, with rates, of the spatial predictor, whose performance is demonstrated by an application to simulated and environmental data. In addition, we generalize the partially linear probit model of independent data to the spatial case. We use a linear process for the disturbances, allowing various spatial dependencies, and propose a semiparametric estimation approach based on weighted likelihood and generalized method of moments. We establish the consistency and asymptotic distribution of the proposed estimators and investigate their finite-sample performance on simulated data. We end with an application of spatial binary choice models to identify UADT (upper aerodigestive tract) cancer risk factors in the north of France, the region which displays the highest rates of such cancer incidence and mortality in the country.
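A plain (non-spatial) k-NN regressor of the kind generalized in this thesis can be sketched as follows; the flexible, heterogeneity-aware spatial version studied there is more involved, and the names below are illustrative:

```python
def knn_predict(x_new, observed, k=3):
    """Predict the response at an unobserved site as the average response
    of the k observations whose covariate is closest to x_new.
    `observed` is a list of (covariate, response) pairs."""
    nearest = sorted(observed, key=lambda xy: abs(xy[0] - x_new))[:k]
    return sum(y for _, y in nearest) / k

# Toy data: (covariate, observed response); response is roughly 2 * covariate.
obs = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (8.0, 16.1)]
pred = knn_predict(2.5, obs, k=2)   # averages the responses observed at 2.0 and 3.0
```

In the spatial setting, the neighbourhood would also account for the locations of the sites, not just covariate proximity, which is precisely where the thesis's generalization operates.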
Caron, Alexandre. "Mesure de la dynamique des polluants gazeux en air intérieur : évaluation des performances de systèmes multi-capteurs." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10161/document.
Nowadays, indoor air quality is a major health issue and a growing research challenge. Many pollutants are present inside buildings. They are directly emitted by indoor sources such as building materials, furniture, occupants and their activities, or transferred from outdoors. Due to an increasing concern for energy saving, recent buildings are much more airtight, reducing the elimination of pollutants to the outside. Standard analyzers are not suitable for monitoring air quality indoors: these techniques are usually bulky, expensive, noisy and require skilled operators. An alternative to these conventional methods recently appeared in the form of microsensors. In this work, the performances and limitations of different types of sensors, such as infrared sensors, electrochemical sensors, photoionisation detectors or semiconductive sensors, for the measurement of CO2, CO, NOx, O3 or VOC, were evaluated in laboratory conditions and also during measurement campaigns, in order to monitor the major indoor air pollutants. Although the response of these sensors is highly correlated with the concentration measured by reference instruments, their lack of selectivity does not always allow a quantitative analysis. A naive Bayes classifier and bisecting k-means clustering were used to help analyze the output of the sensors, allowing the identification of typical pollution events reflecting the dynamics of indoor air quality.
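Bisecting k-means, one of the two methods this abstract mentions, repeatedly splits the cluster with the largest within-cluster error using plain 2-means. A minimal 1-D sketch on toy data (the thesis's multi-sensor pipeline is of course richer):

```python
def sse(points):
    """Within-cluster sum of squared errors about the mean."""
    if not points:
        return 0.0
    m = sum(points) / len(points)
    return sum((p - m) ** 2 for p in points)

def two_means(points, iters=10):
    """One bisection step: split a cluster with plain 1-D 2-means."""
    pts = sorted(points)
    c = [pts[0], pts[-1]]                      # deterministic extreme init
    for _ in range(iters):
        left  = [p for p in pts if abs(p - c[0]) <= abs(p - c[1])]
        right = [p for p in pts if abs(p - c[0]) >  abs(p - c[1])]
        if left:  c[0] = sum(left) / len(left)
        if right: c[1] = sum(right) / len(right)
    return left, right

def bisecting_kmeans(points, k):
    """Keep splitting the highest-SSE cluster until k clusters remain."""
    clusters = [list(points)]
    while len(clusters) < k:
        worst = max(clusters, key=sse)
        clusters.remove(worst)
        clusters.extend(two_means(worst))
    return clusters

# Toy sensor readings forming three separated regimes:
readings = [0.0, 0.1, 0.2, 5.0, 5.1, 9.0, 9.1, 9.2]
clusters = bisecting_kmeans(readings, 3)
```

Compared with flat k-means, the bisecting variant yields a hierarchy of splits, which helps when pollution events of very different magnitudes must be separated.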
Ben, Arab Taher. "Contribution des familles exponentielles en traitement des images." Phd thesis, Université du Littoral Côte d'Opale, 2014. http://tel.archives-ouvertes.fr/tel-01019983.
Sainton, Grégory. "Spectroscopie des supernovæ à grand décalage vers le rouge." Phd thesis, Université Claude Bernard - Lyon I, 2004. http://tel.archives-ouvertes.fr/tel-00106153.
redshift ("evolution"). Within the Supernova Cosmology Project (SCP) and SuperNova Legacy Survey (SNLS) collaborations, whose common scientific objective is the study of dark energy using type Ia supernovae at high redshift, a significant part of this thesis is devoted to the reduction of spectral data,
a necessary step to obtain a physically usable spectrum from observed data. The reduction of all the SCP spectra from the Keck-ESI echellette spectrograph yielded some of the most distant type Ia supernovae ever observed. In the SNLS experiment, spectroscopic identification is mostly carried out with the FORS1 long-slit spectrograph mounted at the focus of the VLT UT1. For the SNLS, about ten spectra must be reduced and identified per lunation for 5 years. As part of this thesis, real-time SNIa identification software was developed; it determines the type, redshift and age of the candidate almost automatically. It also evaluates the contamination
of the host galaxy (whose morphology can also be estimated) in the spectrum. The software was tested on a sample of spectra analysed in detail.
Moreover, for some of them, the Ca H&K (3945.12 Å) velocity in the photosphere was measured and
the results were compared with the same measurements made on a set of nearby spectra. This result confirmed the standardness hypothesis of SNIa at high redshift, a fundamental hypothesis for measuring cosmological parameters with type Ia supernovae.
Coron, Jean-Luc. "Quelques exemples de jeux à champ moyen." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED032/document.
Mean field game theory was introduced in 2006 by Jean-Michel Lasry and Pierre-Louis Lions. It makes it possible to study game theory in situations where the number of players is too large for the game to be solved in practice. We study mean field games on graphs, building on the work of Olivier Guéant, which we extend to more general Hilbertian settings. We also study the links between K-means and mean field game theory: in principle, this offers new algorithms for solving the K-means problem using the numerical resolution techniques of mean field games. Finally, we study a mean field game called the "starting time of a meeting", which we extend to situations where the players can choose between two meetings. We study analytically and numerically the existence and multiplicity of the solutions to this problem.
Vu, Thi Lan Huong. "Analyse statistique locale de textures browniennes multifractionnaires anisotropes." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0094.
We deal with some anisotropic extensions of the multifractional Brownian fields that account for spatial phenomena whose properties of regularity and directionality may both vary in space. Our aim is to build statistical tests to decide whether an observed field of this kind is heterogeneous or not. The statistical methodology relies upon a field analysis by quadratic variations, which are averages of square field increments. Specific to our approach, these variations are computed locally in several directions. We establish an asymptotic result showing a linear Gaussian relationship between these variations and parameters related to the regularity and directional properties of the model. Using this result, we then design a test procedure based on Fisher statistics of linear Gaussian models, and evaluate this procedure on simulated data. Finally, we design some algorithms for the segmentation of an image into regions of homogeneous textures. The first algorithm is based on a K-means procedure which takes estimated parameters as input and takes into account their theoretical probability distributions. The second algorithm is based on an EM algorithm which alternates the two steps (E) and (M) in a loop; the values found in (E) and (M) at each iteration are used for the calculations in the next one. Eventually, we present an application of these algorithms in the context of a pluridisciplinary project which aims at optimizing the deployment of photovoltaic panels on the ground. We deal with a preprocessing step of the project, which concerns the segmentation of images from the Sentinel-2 satellite into regions where the cloud cover is homogeneous.
Ibrahim, Jean-Paul. "Grandes déviations pour des modèles de percolation dirigée & pour des matrices aléatoires." Toulouse 3, 2010. http://thesesups.ups-tlse.fr/1250/.
In this thesis, we study two random models: last-passage percolation and random matrices. Despite the difference between these two models, they highlight common interests and phenomena. Last-passage percolation, or LPP, is a growth model in the lattice plane. It is part of a wide family of growth models and is used to model phenomena in various fields: tandem queues in series, the totally asymmetric simple exclusion process, etc. In the first part of this thesis, we focus on the LPP's large deviation properties; later in this part, we study the LPP's transversal fluctuations. Alongside the work on growth models, we studied another subject that also emerges in the world of physics: random matrices. These matrices are divided into two main categories, introduced twenty years apart: sample covariance matrices and Wigner matrices. Their scope is so broad that they appear in almost every science: probability, combinatorics, atomic physics, multivariate statistics, telecommunications, representation theory, etc. Among the most studied mathematical objects are the joint distribution of eigenvalues, the empirical spectral density, the eigenvalue spacings, the largest eigenvalue and the eigenvectors. For example, in quantum mechanics, the eigenvalues of a GUE matrix model the energy levels of an electron around the nucleus, while the eigenvector associated with the largest eigenvalue of a sample covariance matrix indicates the direction or main axis in data analysis. As with the LPP, we study the large deviation properties of the largest eigenvalue of some sample covariance matrices, a study which could have applications in statistics. Despite the apparent difference, random matrix theory is closely related to the directed percolation model: correlation structures are similar in some cases, and the convergence of the fluctuations to the famous Tracy-Widom law in both models illustrates the connection between them.
Petitjean, Julien. "Contributions au traitement spatio-temporel fondé sur un modèle autorégressif vectoriel des interférences pour améliorer la détection de petites cibles lentes dans un environnement de fouillis hétérogène Gaussien et non Gaussien." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14157/document.
This dissertation deals with space-time adaptive processing in the field of radar. To improve detection performance, this approach consists in maximizing the ratio between the power of the target and that of the interference, i.e. thermal noise and clutter. Several variants of the algorithm exist; one of them is based on multichannel autoregressive modelling of the interference. Its main problem lies in the estimation of the autoregressive matrices from training data, and this guides our research. Our contribution is twofold. On the one hand, when thermal noise is considered negligible, the autoregressive matrices are estimated with a fixed-point method, making the algorithm robust against non-Gaussian clutter. On the other hand, a new modelling of the interference is proposed in which clutter and thermal noise are separated: the clutter is considered as a Gaussian multichannel autoregressive process disturbed by white thermal noise. New estimation algorithms are thus developed. The first is a blind estimation based on errors-in-variables methods. Then, recursive approaches using extensions of the Kalman filter are proposed: the extended Kalman filter, Sigma Point Kalman filters (UKF and CDKF), and the H∞ filter. A comparative study on synthetic and real data with Gaussian and non-Gaussian clutter is carried out to show the relevance of the different algorithms in terms of detection probability.
Schroeder, Pascal. "Performance guaranteeing algorithms for solving online decision problems in financial systems." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0143.
This thesis contains several online financial decision problems and their solutions. The problems are formulated as online problems (OPs) and online algorithms (OAs) are created to solve them. Since there can be various OAs for the same OP, there must be criteria with which one can make statements about the quality of an OA. In this thesis these criteria are the competitive ratio (c), the competitive difference (cd) and the numerical performance. An OA with a lower c is preferable to one with a higher value; an OA that has the lowest c is called optimal. We consider the following OPs: the online conversion problem (OCP), the online portfolio selection problem (PSP) and the cash management problem (CMP). After the introductory chapter, the OPs, the notation and the state of the art in the field of OPs are presented. In the third chapter, three variants of the OCP with interrelated prices are solved. In the fourth chapter the time series search with interrelated prices is revisited and new algorithms are created; at the end of the chapter, the optimal OA k-DIV for the general k-max search with interrelated prices is developed. In Chapter 5 the PSP with interrelated prices is solved. The created OA OPIP is optimal. Using the idea of OPIP, an optimal OA for two-way trading is created (OCIP). Building on OCIP, an optimal OA for the bi-directional search knowing the values of θ_1 and θ_2 is created (BUND). For unknown θ_1 and θ_2, the optimal OA RUN is created. The chapter ends with empirical (for OPIP) and experimental (for OCIP, BUND and RUN) testing. Chapters 6 and 7 deal with the CMP. In both, numerical testing is done in order to compare the numerical performance of the new OAs to that of the already established ones. In Chapter 6 an optimal OA is constructed; in Chapter 7, OAs are designed which minimize cd. The OA BCSID solves the CMP with interrelated demands to optimality.
The OA aBBCSID solves the CMP when the values of θ_1, θ_2, m and M are known; however, this OA is not optimal. In Chapter 7 the CMP is solved knowing m and M and minimizing cd (OA MRBD). For interrelated demands, a heuristic OA (HMRID) and a cd-minimizing OA (MRID) are presented. HMRID is a good compromise between numerical performance and the minimization of cd. The thesis concludes with a short discussion of the shortcomings of the considered OPs and the created OAs, followed by some remarks about future research possibilities in this field.
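For readers unfamiliar with this framework, the classical reservation-price rule for one-way search illustrates how competitive ratios arise (this is the textbook baseline, not the k-DIV or OPIP algorithms developed in the thesis): when prices are known to lie in [m, M], accepting the first price of at least √(mM) guarantees a competitive ratio of √(M/m).

```python
import math

def reservation_price_search(prices, m, M):
    """One-way search: accept the first price reaching sqrt(m*M).
    With prices guaranteed to stay in [m, M], the worst-case ratio between
    the best price in hindsight and the accepted one is sqrt(M/m)."""
    p_star = math.sqrt(m * M)
    for p in prices:
        if p >= p_star:
            return p
    return prices[-1]        # deadline reached: forced to accept the last price

accepted = reservation_price_search([10, 12, 25, 18, 40], m=10, M=40)  # -> 25
```

Here √(10·40) = 20, so the first acceptable quote is 25 even though a better price appears later; bounding exactly this kind of regret, under richer price models, is what the thesis's algorithms are designed for.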
Fournier, Alexandre. "Détection et classification de changements sur des scènes urbaines en télédétection." Phd thesis, 2008. http://tel.archives-ouvertes.fr/tel-00463593.
Vincent, Pascal. "Modèles à noyaux à structure locale." Thèse, 2003. http://hdl.handle.net/1866/14543.
Sanka, Norbert Bertrand. "Étude comparative et choix optimal du nombre de classes en classification et réseaux de neurones : application en science des données." Thèse, 2021. http://depot-e.uqtr.ca/id/eprint/9662/1/eprint9662.pdf.
Gaudet, Sylvain. "L'évaluation en laboratoire et sur le terrain vers la prévention des blessures à l’épaule chez les athlètes de sports aquatiques et d’armée du bras." Thèse, 2018. http://hdl.handle.net/1866/21805.