Academic literature on the topic 'Algorithme de réduction d’erreur'
Journal articles on the topic "Algorithme de réduction d’erreur"
Jaquet, David-Olivier. "Domaines de Voronoï et algorithme de réduction des formes quadratiques définies positives." Journal de Théorie des Nombres de Bordeaux 2, no. 1 (1990): 163–215. http://dx.doi.org/10.5802/jtnb.25.
Fenoglietto, P., L. Bedos, P. Dubois, A. Fenoglietto, J. Dubois, N. Aillères, and A. David. "Prothèses dentaires et algorithme de réduction des artéfacts métalliques pour les patients ORL." Cancer/Radiothérapie 18, no. 5-6 (October 2014): 610. http://dx.doi.org/10.1016/j.canrad.2014.07.071.
Bedos, L., N. Aillères, D. Azria, and P. Fenoglietto. "Évaluation de la précision des nombres Hounsfield d’un nouvel algorithme de réduction des artéfacts métalliques en radiothérapie." Cancer/Radiothérapie 18, no. 5-6 (October 2014): 611. http://dx.doi.org/10.1016/j.canrad.2014.07.074.
Matignon, Michel. "Vers un algorithme pour la réduction stable des revêtements p-cycliques de la droite projective sur un corps p-adique." Mathematische Annalen 325, no. 2 (February 2003): 323–54. http://dx.doi.org/10.1007/s00208-002-0387-4.
Joy, Melanie S., Gary R. Matzke, Deborah K. Armstrong, Michael A. Marx, and Barbara J. Zarowitz. "A Primer on Continuous Renal Replacement Therapy for Critically Ill Patients." Annals of Pharmacotherapy 32, no. 3 (March 1998): 362–75. http://dx.doi.org/10.1345/aph.17105.
Achour, Karim, Nadia Zenati, and Oualid Djekoune. "Contribution to image restoration using a neural network model." Revue Africaine de la Recherche en Informatique et Mathématiques Appliquées Volume 1, 2002 (September 22, 2002). http://dx.doi.org/10.46298/arima.1829.
Bodart, Vincent, Thomas Lambert, Philippe Ledent, and Vincent Scourneau. "Numéro 62 - octobre 2008." Regards économiques, October 12, 2018. http://dx.doi.org/10.14428/regardseco.v1i0.15623.
Bodart, Vincent, Thomas Lambert, Philippe Ledent, and Vincent Scourneau. "Numéro 62 - octobre 2008." Regards économiques, October 12, 2018. http://dx.doi.org/10.14428/regardseco2008.10.01.
Fisher, Clément, Arnaud Recoquillay, Olivier Mesnil, and Oscar d'Almeida. "Détection de défauts par ondes guidées robuste et auto-référencée : application aux pièces en composite tissé de forme complexe." e-journal of nondestructive testing 28, no. 9 (September 2023). http://dx.doi.org/10.58286/28509.
Cockx, Bart, Muriel Dejemeppe, and Bruno Van der Linden. "Numéro 49 - janvier 2007." Regards économiques, October 12, 2018. http://dx.doi.org/10.14428/regardseco.v1i0.15753.
Dissertations / Theses on the topic "Algorithme de réduction d’erreur"
Khoder, Jihan. "Nouvel Algorithme pour la Réduction de la Dimensionnalité en Imagerie Hyperspectrale." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2013. http://tel.archives-ouvertes.fr/tel-00939018.
Khoder, Jihan Fawaz. "Nouvel algorithme pour la réduction de la dimensionnalité en imagerie hyperspectrale." Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0037.
Full textIn hyperspectral imaging, the volumes of data acquired often reach the gigabyte for a single scene observed. Therefore, the analysis of these data complex physical content must go with a preliminary step of dimensionality reduction. Therefore, the analyses of these data of physical content complex go preliminary with a step of dimensionality reduction. This reduction has two objectives, the first is to reduce redundancy and the second facilitates post-treatment (extraction, classification and recognition) and therefore the interpretation of the data. Automatic classification is an important step in the process of knowledge extraction from data. It aims to discover the intrinsic structure of a set of objects by forming groups that share similar characteristics. In this thesis, we focus on dimensionality reduction in the context of unsupervised classification of spectral bands. Different approaches exist, such as those based on projection (linear or nonlinear) of high-dimensional data in a representation subspaces or on the techniques of selection of spectral bands exploiting of the criteria of complementarity-redundant information do not allow to preserve the wealth of information provided by this type of data
Allier, Pierre-Eric. "Contrôle d’erreur pour et par les modèles réduits PGD." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN063/document.
Many structural mechanics problems require the solution of several similar numerical problems. An iterative model reduction approach, the Proper Generalized Decomposition (PGD), makes it possible to control the main solutions at once through the introduction of additional parameters. However, a major obstacle to its use in industry is the absence of a robust error estimator to measure the quality of the solutions obtained. The approach used here is based on the concept of constitutive relation error. This method consists in constructing admissible fields, thus ensuring a conservative and guaranteed error estimate while reusing as many tools as possible from the finite element framework. The ability to quantify the contribution of the different error sources (reduction and discretization) makes it possible to drive the main PGD resolution strategies. Two strategies are proposed in this work. The first is limited to post-processing a PGD solution to build an estimate of the error committed, non-intrusively for existing PGD codes. The second is a new PGD strategy providing an improved approximation together with an estimate of the error committed. The various comparative studies are carried out on linear thermal and elasticity problems. This work also made it possible to optimize the construction of admissible fields by replacing the solution of many similar problems with a PGD solution, exploited as a virtual chart
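The separated-representation idea at the heart of PGD can be sketched in its simplest discrete form: greedily enrich an approximation U(x, y) ≈ Σᵢ Xᵢ(x)Yᵢ(y), finding each new mode by an alternating fixed point on the residual. This is only the algebraic skeleton of a PGD enrichment loop on a toy field, not the thesis's solver or its error estimator; all names are ours.

```python
import numpy as np

def pgd_modes(U, n_modes, iters=50):
    """Greedy separated representation U ~ sum_i outer(X_i, Y_i):
    each mode is computed by an alternating fixed point on the
    current residual, then subtracted (the PGD 'enrichment' step)."""
    R = U.copy()
    modes = []
    for _ in range(n_modes):
        X = np.ones(U.shape[0])
        for _ in range(iters):          # alternating directions
            Y = R.T @ X / (X @ X)       # best Y for fixed X
            X = R @ Y / (Y @ Y)         # best X for fixed Y
        modes.append((X, Y))
        R = R - np.outer(X, Y)          # deflate the residual
    return modes, R

# Separable field on a 40x30 grid: exactly two modes
x = np.linspace(0, 1, 40)[:, None]
y = np.linspace(0, 1, 30)[None, :]
U = np.sin(np.pi * x) * y**2 + 0.5 * np.cos(3 * x) * (1 - y)
modes, R = pgd_modes(U, 2)
print(np.linalg.norm(R) / np.linalg.norm(U))  # tiny: U has rank 2
```

In an actual PGD code the modes solve weak forms of the PDE rather than least-squares fits, and the error estimator described in the abstract bounds the combined reduction and discretization error.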
Zapien, Durand-Viel Karina. "Algorithme de chemin de régularisation pour l'apprentissage statistique." Phd thesis, INSA de Rouen, 2009. http://tel.archives-ouvertes.fr/tel-00557888.
Driant, Thomas. "Réduction de la traînée aérodynamique et refroidissement d'un tricycle hybride par optimisation paramétrique." Thèse, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/6990.
Delestre, Barbara. "Reconstruction 3D de particules dans un écoulement par imagerie interférométrique." Electronic Thesis or Diss., Normandie, 2022. http://www.theses.fr/2022NORMR116.
Full textFor many industrial or environmental applications, it is important to measure the size and volume of irregularly shaped particles. This is for example the case in the context of aircraft icing which occurs during flights, where it is necessary to measure in situ the water content and the ice content in the troposphere in order to detect and avoid risk areas. Our interest has been on interferometric out-of-focus imaging, an optical technique offering many advantages (wide measurement field, extended range of sizes studied [50 μm: a few millimeters], distance particle / measuring device several tens of centimeters ...). During this thesis, we showed that the 3D reconstruction of a particle can be done from a set of three interferometric images of this particle (under three perpendicular viewing angles). This can be done using the error reduction (ER) algorithm which allows to obtain the function f(x,y) from the measurements of the modulus of its 2D Fourier transform |F(u,v)| , by reconstructing the phase of its 2D Fourier transform. The implementation of this algorithm allowed us to reconstruct the shape of irregular particles from their interferometric images. Experimental demonstrations were carried out using a specific assembly based on the use of a micro-mirror array (DMD) which generates the interferometric images of programmable rough particles. The results obtained are very encouraging. The volumes obtained remain quite close to the real volume of the particle and the reconstructed 3D shapes give us a good idea of the general shape of the particle studied even in the most extreme cases where the orientation of the particle is arbitrary. Finally, we showed that an accurate 3D reconstruction of a "programmed" rough particle can be performed from a set of 120 interferometric images
Karina, Zapien. "Algorithme de Chemin de Régularisation pour l'apprentissage Statistique." Phd thesis, INSA de Rouen, 2009. http://tel.archives-ouvertes.fr/tel-00422854.
The usual approach to determining these hyperparameters is to use a "grid": given a set of candidate values, the generalization error of the best model is estimated for each of them. This thesis investigates an alternative approach that computes the full set of possible solutions for all values of the hyperparameters, known as the regularization path. For the learning problems of interest here, parametric quadratic programs, we show that the regularization path associated with certain hyperparameters is piecewise linear, and that its computational cost is of the order of an integer multiple of the cost of fitting a model with a single set of hyperparameters.
The thesis is organized in three parts. The first presents the general framework of SVM-type learning problems (Support Vector Machines) together with the theoretical and algorithmic tools needed to address them. The second part deals with supervised learning for classification and ranking within the SVM framework; we show that the regularization path of these problems is piecewise linear, a result that allows us to develop original classification and ranking algorithms. The third part successively addresses semi-supervised and unsupervised learning. For semi-supervised learning, we introduce a sparsity criterion and propose the associated regularization path algorithm. For unsupervised learning, we use a dimensionality reduction approach: unlike similarity-graph methods that use a fixed number of neighbors, we introduce a new method allowing an adaptive and appropriate choice of the number of neighbors.
Nouet, Christophe. "Réduction de l'ordre des systèmes continus, linéaires, via un processus d'orthogonalisation et un algorithme de Gauss-Newton." Brest, 1994. http://www.theses.fr/1994BRES2040.
Simon, Frank. "Contrôle actif appliqué à la réduction du bruit interne d'aéronefs." Toulouse, ENSAE, 1997. http://www.theses.fr/1997ESAE0002.
Zapién, Arreola Karina. "Algorithme de chemin de régularisation pour l'apprentissage statistique." Thesis, Rouen, INSA, 2009. http://www.theses.fr/2009ISAM0001/document.
The selection of a proper model is an essential task in statistical learning. In general, for a given learning task, a set of parameters has to be chosen, each corresponding to a different degree of "complexity". The model selection procedure then becomes a search for the optimal complexity, allowing us to estimate a model with good generalization. This problem can be summarized as the calculation of one or more hyperparameters defining the model complexity, in contrast to the parameters that specify a model within the chosen complexity class. The usual approach to determining these hyperparameters is a "grid search": given a set of possible values, the generalization error of the best model is estimated for each of them. This thesis focuses on an alternative approach that calculates the complete set of possible solutions for all hyperparameter values, called the regularization path. It can be shown that for the problems of interest here, parametric quadratic programming (PQP), the corresponding regularization path is piecewise linear; moreover, its calculation is no more complex than calculating a single PQP solution. The thesis is organized in three chapters. The first introduces the general setting of a learning problem in the Support Vector Machine (SVM) framework, together with the theory and algorithms that allow us to find a solution. The second part deals with supervised learning problems for classification and ranking using the SVM framework; it is shown that the regularization path of these problems is piecewise linear, with alternative proofs to that of Rosset [Ross 07b] given via the subdifferential. These results lead to algorithms for the corresponding supervised problems. The third part deals with semi-supervised learning problems, followed by unsupervised learning problems.
For semi-supervised learning, a sparsity constraint is introduced along with the corresponding regularization path algorithm. Graph-based dimensionality reduction methods are used for unsupervised learning; our main contribution is a novel algorithm that chooses the number of nearest neighbors adaptively, in contrast to classical approaches based on a fixed number of neighbors
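The piecewise linearity of the regularization path is easiest to see in the special case of the lasso with an orthonormal design, where each coefficient is a soft-thresholded correlation and the whole path can be written in closed form instead of re-solving on a grid. This is a textbook special case, not the thesis's parametric-QP algorithm; the function name and numbers below are ours.

```python
import numpy as np

def soft_threshold_path(z, lambdas):
    """Lasso coefficients for an orthonormal design at each lambda:
    beta_j(lam) = sign(z_j) * max(|z_j| - lam, 0), with z = X^T y.
    Each coordinate is piecewise linear in lam, so the entire
    regularization path is known without fitting at every grid point."""
    z = np.asarray(z, float)
    lam = np.asarray(lambdas, float)[:, None]  # one row per lambda
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([3.0, -1.5, 0.5])                 # correlations X^T y
lams = np.array([0.0, 1.0, 2.0])
path = soft_threshold_path(z, lams)
print(path)  # at lam=2.0 only the first coefficient survives
```

The general parametric-QP case studied in the thesis keeps this structure: between "events" (a point entering or leaving the active set), the solution moves linearly in the hyperparameter, so the path is traced event by event at roughly the cost of a single fit.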