Dissertations / Theses on the topic 'Caractéristique extraction'
Consult the top 50 dissertations / theses for your research on the topic 'Caractéristique extraction.'
Pacheco, Do Espirito Silva Caroline. "Feature extraction and selection for background modeling and foreground detection." Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS005/document.
In this thesis, we present a robust descriptor for background subtraction that describes texture from an image sequence. The descriptor is less sensitive to noisy pixels and produces a short histogram, while remaining robust to illumination changes. We also propose a descriptor for dynamic texture recognition that extracts not only color information but also more detailed information from video sequences. Finally, we present an ensemble-based feature selection approach that selects, for each pixel, the features best suited to distinguishing foreground objects from the background. Our proposal uses a mechanism to update the relative importance of each feature over time; a heuristic approach reduces the complexity of maintaining the background model while preserving its robustness. However, this method only reaches its highest accuracy when the number of features is very large, and each base classifier learns a feature set instead of individual features. To overcome these limitations, we extended our previous approach with a new methodology for selecting features based on wagging, and adopted a superpixel-based approach instead of a pixel-level one. This not only increases efficiency in terms of time and memory consumption, but also improves the segmentation of moving objects.
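The per-pixel importance-update mechanism described in this abstract can be illustrated with a minimal sketch: features whose vote agrees with the ensemble decision gain weight multiplicatively. Everything here (the update rule, the learning rate, the three feature names) is a hypothetical reading of the abstract, not the thesis implementation:

```python
def update_feature_weights(weights, votes, decision, lr=0.1):
    """Reward features whose vote matches the ensemble decision with a
    multiplicative update, then renormalise. Hypothetical sketch of the
    importance-update mechanism described in the abstract."""
    scaled = [w * (1 + lr) if v == decision else w * (1 - lr)
              for w, v in zip(weights, votes)]
    total = sum(scaled)
    return [w / total for w in scaled]

# Three per-pixel features (say colour, texture, edge) start equal;
# colour and texture voted with the final decision, edge against it.
w = update_feature_weights([1/3, 1/3, 1/3], votes=[1, 1, 0], decision=1)
```

Iterated over frames, weights drift toward the features that reliably separate foreground from background at that pixel (or superpixel).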
Mokrane, Abdenour. "Représentation de collections de documents textuels : application à la caractéristique thématique." PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2006. http://tel.archives-ouvertes.fr/tel-00401651.
Nguyen, Thanh Tuan. "Représentations efficaces des textures dynamiques." Electronic Thesis or Diss., Toulon, 2020. https://bu.univ-tln.fr/files/userfiles/file/intranet/travuniv/theses/sciences/2020/2020_Nguyen_ThanhTuan.pdf.
Representing dynamic textures (DTs), well known as sequences of moving textures, is a challenge in video analysis for various computer vision applications, partly due to the disorientation of motions and to well-known issues in capturing turbulent features: noise, environmental changes, illumination, similarity transformations, etc. In this work, we introduce solutions to these problems along three streams of DT encoding: i) based on dense trajectories extracted from a given video; ii) based on robust responses extracted by moment models; iii) based on filtered outcomes computed by variants of Gaussian filtering kernels. In parallel, we propose several discriminative descriptors to capture spatio-temporal features for these DT encodings. For DT representation based on dense trajectories, we first extract dense trajectories from a given video. Motion points along the paths of dense trajectories are then encoded by our xLVP operator, an important extension of Local Vector Patterns (LVP) in a completed encoding context, in order to capture directional dense-trajectory-based features for DT representation. For DT description based on moment models, motivated by the moment-image model, we propose a novel model of moment volumes based on statistical information of spherical supporting regions centered at a voxel. These two models are then applied to video analysis to produce moment-based images/volumes. To encode the moment-based images, we introduce the CLSP operator, a variant of completed local binary patterns (CLBP). Meanwhile, our xLDP operator, an important extension of Local Derivative Patterns (LDP) in a completed encoding context, captures spatio-temporal features of the moment-volume-based outcomes.
For DT representation based on Gaussian filterings, we investigate several kinds of filtering as pre-processing steps to produce filtered outcomes of a video, which are then encoded by discriminative operators to structure the corresponding DT descriptors. More concretely, we exploit the Gaussian kernel and variants of high-order Gaussian gradients for the filtering analysis. In particular, we introduce a novel filtering kernel (DoDG) based on the difference of Gaussian gradients, which yields robust DoDG-filtered components from which prominent low-dimensional DoDG-based descriptors are constructed. In parallel to the Gaussian filterings, several novel operators are introduced to address different contexts of local DT encoding: CAIP, an adaptation of CLBP that fixes the close-to-zero problem caused by separately bipolar features; LRP, based on the concept of a square cube of local neighbors sampled around a center voxel; and CHILOP, a generalized formulation of CLBP that investigates local relationships of hierarchical supporting regions. Experiments on DT recognition have validated that our proposals perform significantly well compared with the state of the art. Some of them perform very close to deep-learning approaches and are expected to be appreciated solutions for mobile applications, thanks to their computational simplicity and the small number of bins in their DT descriptors.
Nguyen, Huu-Tuan. "Contributions to facial feature extraction for face recognition." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT034/document.
Centered on feature extraction, the core task of any face recognition system, our objective is to devise a facial representation robust to major challenges such as variations of illumination, pose, time lapse, and low-resolution probe images, to name a few. Fast processing speed is another crucial criterion. Toward these ends, several methods are proposed throughout this thesis. First, based on the orientation characteristics of the facial information and of important features like the eyes and mouth, a novel variant of LBP, referred to as ELBP, is designed to encode micro-patterns using a horizontal elliptical sampling pattern. Second, ELBP is exploited to extract local features from oriented edge magnitude images, yielding the Elliptical Patterns of Oriented Edge Magnitudes (EPOEM) description. Third, we propose a novel feature extraction method called Patch-based Local Phase Quantization of Monogenic components (PLPQMC). Lastly, a robust facial representation named Local Patterns of Gradients (LPOG) is developed to capture meaningful features directly from gradient images. Chief among these methods are PLPQMC and LPOG, as they are inherently illumination invariant and blur tolerant. Impressively, our methods, while offering results comparable to or higher than those of existing systems, have low computational cost and are thus feasible to deploy in real-life applications.
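The elliptical sampling behind ELBP can be sketched as an LBP code whose neighbours lie on a horizontal ellipse around the centre pixel. Nearest-neighbour sampling replaces the bilinear interpolation a real implementation would use; the semi-axes and neighbour count are assumptions:

```python
import math

def elbp_code(img, y, x, a=3, b=2, P=8):
    """LBP-style code with P neighbours sampled on a horizontal ellipse
    (semi-axes a >= b) centred at (y, x). Sketch of the ELBP idea with
    nearest-neighbour sampling for brevity."""
    c = img[y][x]
    code = 0
    for p in range(P):
        t = 2 * math.pi * p / P
        ny = y + int(round(b * math.sin(t)))   # short vertical axis
        nx = x + int(round(a * math.cos(t)))   # long horizontal axis
        if img[ny][nx] >= c:
            code |= 1 << p
    return code

flat = [[5] * 9 for _ in range(9)]       # uniform patch -> all bits set
bright = [[5] * 9 for _ in range(9)]
bright[4][4] = 9                         # centre brighter -> no bits set
```

The horizontal elongation mirrors the abstract's observation that discriminative facial structures (eyes, mouth) are horizontally oriented.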
Vachier, Corinne. "Extraction de caractéristiques, segmentation d'image et morphologie mathématique." PhD thesis, École Nationale Supérieure des Mines de Paris, 1995. http://pastel.archives-ouvertes.fr/pastel-00004230.
Auclair Fortier, Marie-Flavie. "Extraction de caractéristiques contours multispectraux, contours de texture et routes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0021/MQ56853.pdf.
Full textHanifi, Majdoulayne. "Extraction de caractéristiques de texture pour la classification d'images satellites." Toulouse 3, 2009. http://thesesups.ups-tlse.fr/675/.
This thesis falls within the general framework of multimedia data processing; we used satellite images as the application domain. We were interested in the extraction of textural features, and we proposed a new texture pre-processing method to improve the extraction of these characteristic attributes. The increase in satellite resolution paradoxically disrupted researchers during the first classifications on high-resolution data: the very homogeneous maps obtained until then at medium resolution became very fragmented and difficult to use with the same classification algorithms. One way to remedy this problem is to characterize each pixel to be classified by parameters measuring the spatial organization of the pixels in its neighbourhood. There are several approaches to texture analysis in images. For satellite images, the statistical approach is usually retained, in particular the co-occurrence matrix and the correlogram, based on second-order statistical analysis (in the sense of probabilities over pairs of pixels). These are the two methods on which we build to extract textural information in the form of a vector. These matrices have some drawbacks, such as the required memory size and the high computation time of the parameters. To bypass this problem, we sought a method for reducing the number of grey levels, called rank coding, allowing us to pass first from 256 to 9 grey levels, and then, to improve image quality, to 16 grey levels, while keeping the structure and texture of the image. This thesis showed that rank coding is a good way to compress an image without losing textural information: it reduces the size of the data, which in turn reduces the computation time of the features.
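The pipeline sketched in this abstract, rank coding to shrink the grey-level range followed by a co-occurrence matrix, might look as follows. The 3x3 rank rule (replace each pixel by the rank of its value among its 9-neighbourhood, giving 9 levels) is one plausible reading of the coding, not the thesis algorithm:

```python
import numpy as np

def rank_code(img):
    """Replace each interior pixel by the rank of its value inside its
    3x3 neighbourhood (0..8), shrinking 256 grey levels to 9 -- one
    plausible reading of the rank coding described in the abstract."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            block = img[i-1:i+2, j-1:j+2]
            out[i-1, j-1] = int((block < img[i, j]).sum())
    return out

def cooccurrence(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence counts for displacement (dy, dx)."""
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m

img = (np.arange(36).reshape(6, 6) * 7 % 256).astype(np.uint8)
glcm = cooccurrence(rank_code(img), levels=9)   # 9x9 instead of 256x256
```

The payoff is exactly the one the abstract claims: the co-occurrence matrix shrinks from 256x256 to 9x9 (or 16x16), cutting both memory and the cost of computing Haralick-style parameters.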
Zubiolo, Alexis. "Extraction de caractéristiques et apprentissage statistique pour l'imagerie biomédicale cellulaire et tissulaire." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4117/document.
The purpose of this Ph.D. thesis is to study the classification, based on morphological features, of cells and tissues taken from biomedical images, with the goal of helping medical doctors and biologists better understand certain biological phenomena. This work is divided into three main parts, corresponding to the three typical biomedical imaging problems tackled. The first part consists in analyzing endomicroscopic videos of the colon in which the pathological class of the polyps has to be determined. This task is performed using a supervised multiclass machine learning algorithm combining support vector machines and graph theory tools. The second part concerns the study of the morphology of mouse neurons imaged by fluorescent confocal microscopy. To obtain rich information, the neurons are imaged at two different magnifications: the higher magnification, where the soma appears in detail, and the lower one, showing the whole cortex including the apical dendrites. On these images, morphological features are automatically extracted with the intention of performing a classification. The last part concerns the multi-scale processing of digital histology images in the context of kidney cancer. The vascular network is extracted and modeled by a graph to establish a link between the architecture of the tumor and its pathological class.
Alioua, Nawal. "Extraction et analyse des caractéristiques faciales : application à l'hypovigilance chez le conducteur." Thesis, Rouen, INSA, 2015. http://www.theses.fr/2015ISAM0002/document.
Studying facial features has attracted increasing attention in both academic and industrial communities. Indeed, these features convey nonverbal information that plays a key role in human communication, and they are very useful for human-machine interaction. The automatic study of facial features is therefore an important task for various applications, including robotics, human-machine interfaces, behavioral science, clinical practice, and monitoring of the driver's state. In this thesis, we focus on monitoring the driver's state through the analysis of facial features. This problem is of universal interest because of the increasing number of road accidents, principally caused by a deterioration in the driver's vigilance level, known as hypovigilance. Three hypovigilance states can be distinguished. The first and most critical is drowsiness, which is manifested by an inability to stay awake and is characterized by microsleep intervals of 2-6 seconds. The second is fatigue, defined by the increasing difficulty of maintaining a task and characterized by a large number of yawns. The third is inattention, which occurs when attention is diverted from the driving activity and is characterized by a head pose held in a non-frontal direction. The aim of this thesis is to propose approaches based on facial features for identifying driver hypovigilance. The first approach detects drowsiness by identifying microsleep intervals through eye-state analysis. The second identifies fatigue by detecting yawning through mouth analysis. Since no public hypovigilance database is available, we acquired and annotated our own database, in which different subjects simulate hypovigilance under real lighting conditions, to evaluate the performance of these two approaches.
Next, we developed two driver head pose estimation approaches to detect inattention and to determine the vigilance level even when the facial features (eyes and mouth) cannot be analyzed because of non-frontal head positions. We evaluated these two estimators on the public database Pointing'04, then acquired and annotated a driver head pose database to evaluate them in real driving conditions.
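The microsleep criterion quoted in this abstract (eyes closed for 2-6 seconds) can be turned into a small detector once a frame-level eye-state classifier is assumed to be given; the frame rate and the exact thresholds below are illustrative:

```python
def microsleep_intervals(eye_closed, fps=25.0, min_s=2.0, max_s=6.0):
    """Return (start, end) frame ranges where the eyes stay closed for
    min_s..max_s seconds -- the microsleep criterion quoted in the
    abstract. The per-frame eye state (1 = closed) is assumed to come
    from an upstream classifier."""
    events, run_start = [], None
    for i, closed in enumerate(list(eye_closed) + [0]):   # sentinel
        if closed and run_start is None:
            run_start = i
        elif not closed and run_start is not None:
            duration = (i - run_start) / fps
            if min_s <= duration <= max_s:
                events.append((run_start, i))
            run_start = None
    return events

# 1 s open, 3 s closed, 1 s open at 10 fps -> one microsleep event
sig = [0] * 10 + [1] * 30 + [0] * 10
```

Short blinks fall below `min_s` and very long closures above `max_s`, so only the 2-6 s band counts as a microsleep.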
Amara, Mounir. "Segmentation de tracés manuscrits. Application à l'extraction de primitives." Rouen, 1998. http://www.theses.fr/1998ROUES001.
Osty, Guillaume. "Extraction de particularités sur données discrètes issues de numérisation 3D : partitionnement de grands nuages de points." Cachan, Ecole normale supérieure, 2002. http://www.theses.fr/2002DENS0003.
Full textBizeau, Alexandre. "Segmentation et extraction de caractéristiques des vaisseaux sanguins cérébraux à l'aide de l'IRM." Mémoire, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/10259.
Neurovascular coupling is a growing field that studies the effects of cerebral activity on the behaviour of cerebral blood flow (CBF) and of the blood vessels themselves. With magnetic resonance imaging (MRI), it is possible to obtain images such as susceptibility-weighted imaging (SWI) to see the veins, or time-of-flight magnetic resonance angiography (TOF MRA) to visualize the arteries. These images provide a structural representation of the vessels in the brain. This thesis presents a method to segment blood vessels from structural images and extract their features. Using the segmentation mask, it is possible to compute the diameter of the vessels as well as their length. With the help of such automatic segmentation tools, we conducted a study of the behaviour of blood vessels during neuronal activity. Using visual stimulation, we acquired two images: one at rest and the other during stimulation. We compared the diameters in the two images and obtained the vasodilation in millimeters, and also as a percentage, in each voxel. We also computed the distance between the activation site and each voxel to observe the magnitude of the vasodilation as a function of distance. All this provides a better understanding of the vascular system of the human brain.
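The per-voxel comparison described in this abstract reduces to a percentage change in diameter plus a distance to the activation site; a minimal numpy sketch, with units and coordinate conventions assumed:

```python
import numpy as np

def vasodilation_pct(d_rest, d_stim):
    """Per-voxel vasodilation in % from vessel diameters (mm) measured
    at rest and under stimulation -- illustrative of the comparison
    made in the abstract, not the thesis pipeline."""
    d_rest = np.asarray(d_rest, dtype=float)
    d_stim = np.asarray(d_stim, dtype=float)
    return 100.0 * (d_stim - d_rest) / d_rest

def distance_to_site(voxel_coords, site, voxel_mm=1.0):
    """Euclidean distance (mm) from each voxel to the activation site,
    assuming isotropic voxels of size voxel_mm."""
    v = np.asarray(voxel_coords, dtype=float)
    return voxel_mm * np.linalg.norm(v - np.asarray(site, dtype=float),
                                     axis=1)

rest = np.array([2.0, 1.0, 0.5])          # diameters at rest (mm)
stim = np.array([2.2, 1.1, 0.5])          # diameters under stimulation
pct = vasodilation_pct(rest, stim)        # ~[10, 10, 0] percent
dist = distance_to_site([[0, 0, 0], [3, 4, 0]], site=(0, 0, 0))
```

Plotting `pct` against `dist` gives the vasodilation-versus-distance curve the abstract mentions.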
Rousseau, Marie-Ève. "Détection de visages et extraction de caractéristiques faciales dans des images de scènes complexes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq33745.pdf.
Full textNoorzadeh, Saman. "Extraction de l'ECG du foetus et de ses caractéristiques grâce à la multi-modalité." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT135/document.
Fetal health must be carefully monitored during pregnancy to detect cardiac diseases early and provide appropriate treatment. Technological developments allow monitoring during pregnancy using the non-invasive fetal electrocardiogram (ECG). Non-invasive fetal ECG makes it possible not only to detect the fetal heart rate, but also to analyze the morphology of the fetal ECG, which until now has been limited to analysis of the invasive ECG during delivery. However, the non-invasive fetal ECG recorded from the mother's abdomen is contaminated by several noise sources, among which the maternal ECG is the most prominent. In the present study, the problem of non-invasive fetal ECG extraction is tackled using multi-modality. Besides the ECG signal, this approach benefits from the phonocardiogram (PCG) as another modality, which can provide complementary information about the fetal ECG. A general method for quasi-periodic signal analysis and modeling is first described and its application to ECG denoising and fetal ECG extraction is explained. Considering the difficulties caused by the synchronization of the two modalities, event detection in quasi-periodic signals is also studied, which can be specialized to the detection of R-peaks in the ECG signal. The method considers both the clinical and the signal processing aspects of the application to ECG and PCG signals. These signals are introduced and their characteristics are explained. Then, using the PCG signal as the reference, Gaussian process modeling is employed to provide flexible models as nonlinear estimations.
The method also aims to facilitate the practical implementation of the device by using as few channels as possible and only a 1-bit reference signal. It is tested on synthetic data and on real data recorded to provide a synchronous multi-modal data set. Since no standard for the acquisition of these modalities has yet received much consideration, the factors which influence the signals during the recording procedure are introduced and their difficulties and effects are investigated. The results show that the multi-modal approach is efficient for detecting R-peaks, and thus for extracting the fetal heart rate, and it also yields results on the morphology of the fetal ECG.
Bouyanzer, Hassane. "Extraction automatique de caractéristiques sur des images couleurs : application à la mesure de paramètres." Rouen, 1992. http://www.theses.fr/1992ROUES059.
Full textQuach, Kim Anh. "Extraction de caractéristiques de l’activité ambulatoire du patient par fusion d’informations de centrales inertielles." Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10059.
The growth of the elderly population raises many questions about care that would allow the elderly to live autonomously for a long time. The quantification of daily activities plays a large role in evaluating good health and detecting early signs of loss of autonomy. In this thesis, we present our work and results on the processing of actimetric data using kinematic sensors (tri-axial accelerometers, magnetometers and gyroscopes) worn by the subject or integrated into a smartphone. We approach posture detection as a first step toward monitoring the Activities of Daily Living (ADL). We then consider the personalization of a reference set of activities suited to the subject, to improve detection. Finally, we propose a single index that summarizes the subject's level of daily activity and thus makes it possible to evaluate the overall trend of the subject's actimetry. Actimetry has great potential in the market of technologies for helping the elderly remain at home.
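A posture detector of the kind used as a first step here can be sketched from the gravity direction seen by a trunk-worn tri-axial accelerometer; the axis convention (z vertical when standing) and the angle thresholds are assumptions, not the thesis rules:

```python
import math

def posture(ax, ay, az, g=9.81):
    """Classify a static posture from the tilt of the gravity vector
    measured by a trunk-worn tri-axial accelerometer (z pointing up
    when the subject stands). Illustrative thresholds."""
    cos_tilt = max(-1.0, min(1.0, az / g))     # clamp numerical noise
    tilt = math.degrees(math.acos(cos_tilt))   # 0 deg = fully upright
    if tilt < 30:
        return "upright"
    if tilt > 60:
        return "lying"
    return "leaning"
```

A real system would add dynamic activity detection (walking, transfers) on top of such static-posture rules before computing a daily-activity index.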
Angoustures, Mark. "Extraction automatique de caractéristiques malveillantes et méthode de détection de malware dans un environnement réel." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1221.
To cope with the large volume of malware, researchers have developed automatic dynamic tools for malware analysis, such as the Cuckoo sandbox. This analysis is only partially automatic because it requires the intervention of a human security expert to detect and extract suspicious behaviours. To avoid this tedious work, we propose a methodology to extract dangerous behaviours automatically. First of all, we generate activity reports for malware samples from the Cuckoo sandbox. Then, we group malware belonging to the same family using the Avclass algorithm. We then weight the most singular behaviours of each malware family obtained previously. Finally, we aggregate malware families with similar behaviours using the LSA method. In addition, we detail a method to detect malware exhibiting the same types of behaviours found previously. Since this detection is performed in a real environment, we developed probes capable of generating traces of the behaviour of continuously executing programs. From these traces, we build a graph that represents the tree of running programs together with their behaviours. This graph is updated incrementally as new traces are generated. To measure the dangerousness of programs, we run the personalized PageRank algorithm on this graph each time it is updated. The algorithm ranks processes by dangerousness according to their suspicious behaviours. These scores are then plotted on a time series to visualize the evolution of the dangerousness score of each program. Finally, we developed several indicators that raise alerts on dangerous programs executing on the system.
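The dangerousness scoring can be sketched as personalized PageRank run by power iteration on a small process graph, with the restart vector concentrated on processes showing suspicious behaviours; the graph and the process names are hypothetical, not taken from the thesis:

```python
def personalized_pagerank(adj, personalization, d=0.85, iters=100):
    """Personalized PageRank by power iteration on an adjacency dict
    {node: [successors]}. The restart vector concentrates mass on the
    nodes flagged by suspicious behaviours (sketch of the idea, not
    the thesis implementation)."""
    nodes = list(adj)
    total = sum(personalization.get(n, 0.0) for n in nodes)
    p = {n: personalization.get(n, 0.0) / total for n in nodes}
    r = dict(p)
    for _ in range(iters):
        nxt = {n: (1 - d) * p[n] for n in nodes}
        for n in nodes:
            succ = adj[n] or nodes          # dangling node: spread everywhere
            share = d * r[n] / len(succ)
            for m in succ:
                nxt[m] += share
        r = nxt
    return r

# explorer spawned a script, which spawned a dropper (names hypothetical);
# the probes flagged the script's behaviour as suspicious.
graph = {"explorer": ["script"], "script": ["dropper"], "dropper": []}
scores = personalized_pagerank(graph, {"script": 1.0})
```

Mass injected at the flagged process flows to its descendants, so both the script and the dropper it spawned outrank the benign parent.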
El, Ferchichi Sabra. "Sélection et extraction d'attributs pour les problèmes de classification." Thesis, Lille 1, 2013. http://www.theses.fr/2013LIL10042/document.
Scientific advances in recent years have produced databases that are increasingly large and complex, which leads some classifiers to generate classification rules based on irrelevant features and thus degrades the quality of classification and the ability to generalize. In this context, we propose a new feature extraction method to improve classification quality. Our method performs a clustering of features to find groups of similar features. A new similarity measure based on trend analysis is designed to capture similarity between features in their behavior. Indeed, our method aims to reduce redundant information while identifying similar trends in feature vectors throughout the database. After the clusters are built, a linear transformation is applied to each group to obtain a single representative. To find an optimal center, we propose to maximize the Mutual Information (MI) as a measure of the dependency between a group of features and its desired center. Experiments on real and synthetic data show that our method achieves good classification performance in comparison with other feature extraction methods. It has also been applied to the industrial diagnosis of a complex chemical process, the Tennessee Eastman Process (TEP).
Lozano, Vega Gildardo. "Image-based detection and classification of allergenic pollen." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS031/document.
The correct classification of airborne pollen is relevant for the medical treatment of allergies, and the usual manual process is costly and time-consuming; automatic processing would considerably increase the potential of pollen counting. Modern computer vision techniques enable the detection of discriminant pollen characteristics. In this thesis, a set of relevant image-based features for the recognition of the top allergenic pollen taxa is proposed and analyzed. The foundation of our proposal is the evaluation of groups of features that properly describe pollen in terms of shape, texture, size and apertures. The features are extracted from typical brightfield microscope images, which makes the method easy to reproduce. A feature selection process is applied to each group to determine its relevance. Regarding apertures, a flexible method is proposed for the detection, localization and counting of apertures of different pollen taxa with varying appearances. Aperture description is based on primitive images following the bag-of-words strategy. A confidence map is built from the classification confidence of sampled regions, and from this map aperture features, including the aperture count, are extracted. The method is designed to be extended modularly to new aperture types, employing the same algorithm to build the individual classifiers. The feature groups are tested individually and jointly on the most allergenic pollen taxa in Germany, and were shown to overcome the intra-class variance and inter-class similarity in an SVM classification scheme. The global joint test led to an accuracy of 98.2%, comparable to state-of-the-art procedures.
Bodi, Geoffroy. "Débruitage, déconvolution et extraction de caractéristiques de signaux dans le domaine temporel pour imagerie biomédicale optique." Mémoire, Université de Sherbrooke, 2010. http://savoirs.usherbrooke.ca/handle/11143/1588.
Full textBonnevay, Stéphane. "Extraction de caractéristiques de texture par codages des extrema de gris et traitement prétopologique des images." Lyon 1, 1997. http://www.theses.fr/1997LYO10276.
Full textClémençon, Boris. "Extraction des lignes caractéristiques géométriques des surfaces paramétrées et application à la génération de maillages surfaciques." Troyes, 2008. http://www.theses.fr/2008TROY0004.
A major issue in meshing a given analytical surface is guaranteeing the accuracy of the underlying geometry, which can be achieved in particular by adapting the mesh to the surface curvature. Without curvature adaptation, parasitic undulations appear in areas where the specified element size is locally large with respect to the minimum radius of curvature: this phenomenon is called aliasing. The classical approach to reducing this phenomenon is to locally decrease the edge size, at the cost of a greater number of elements. We propose instead to adapt the mesh to the geometry by locating vertices and edges along the ridges. These lines are the maxima of the principal curvatures in absolute value along their associated lines of curvature. We present methods to characterize and extract the ridges in the case of a parametric surface, and discuss singularities such as umbilics and extremal points. The resulting vertices and discrete lines form a graph represented by a set of edges. Simplified polygonal lines representing significant ridges are extracted from this graph, interpolated, and then integrated as internal curves in the parametric domain. The mesh of the parametric domain including these lines is generated and mapped onto the surface. Examples show that taking ridge lines into account avoids aliasing without increasing the number of elements, and also reduces the gap between the surface and the mesh.
Richard, Jean-Michel. "Étude de l'orellanine, toxine de Cortinarius orellanus Fries : extraction, purification, détection, dosage, caractéristiques physico-chimiques, toxicité." Université Joseph Fourier (Grenoble), 1987. http://www.theses.fr/1987GRE18004.
Full textApatean, Anca Ioana. "Contributions à la fusion des informations : application à la reconnaissance des obstacles dans les images visible et infrarouge." Phd thesis, INSA de Rouen, 2010. http://tel.archives-ouvertes.fr/tel-00621202.
Full textCremer, Sandra. "Adapting iris feature extraction and matching to the local and global quality of iris image." Thesis, Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0026.
Iris recognition has become one of the most reliable and accurate biometric systems available, but its robustness to degradations of the input images is limited. Iris-based systems can generally be divided into four steps: segmentation, normalization, feature extraction and matching. Degradation of input image quality can have repercussions on all of these steps: it makes segmentation more difficult, which can result in normalized iris images that contain distortion or undetected artefacts, and it can reduce the amount of information available for matching. In this thesis we propose methods to improve the robustness of the feature extraction and matching steps to degraded input images. We work with two algorithms for these two steps, both based on convolution with 2D Gabor filters but using different matching techniques. The first part of our work aims at controlling the quality and quantity of the information selected for matching in the normalized iris images. To this end we define local and global quality metrics that measure the amount of occlusion and the richness of texture in iris images, and we use these measures to determine the position and the number of regions to exploit for feature extraction and matching. In the second part, we study the link between image quality and the performance of the two recognition algorithms just described, and show that the second one is more robust to degraded images containing artefacts, distortion or poor iris texture. Finally, we propose a complete iris recognition system that combines our local and global quality metrics to optimize recognition performance.
El, Omari Hafsa. "Extraction des paramètres des modèles du VDMOS à partir des caractéristiques en commutation : comparaison avec les approches classiques." Lyon, INSA, 2003. http://theses.insa-lyon.fr/publication/2003ISAL0040/these.pdf.
Full textThe study concerns the analysis and characterization of the VDMOS. The first part of the text recalls the structure, behavior and modeling of the VDMOS. A semi-behavioral model, the "2KP model", has been selected. Experimental characterizations have been carried out in I-V, C-V and switching modes of operation. The role of pulse duration has been studied for quasi-static I-V characterization. The second part describes classical characterization and parameter extraction techniques applied to VDMOS models. Comparisons between simulations and measurements in switching-mode operation in an R-L circuit are carried out. The third part corresponds to parameter extraction of the VDMOS model based on R-L switching measurements. Transient signals measured in such conditions yield sufficient information for the parameter extraction. An automatic identification procedure, based on optimization of the difference between measurements and simulation, has been applied, and a comparison between PACTE simulations and experiments has been made. The results obtained are equivalent to those of the classical method. The interest of the proposed method is a drastic reduction of measurement noise
Charbuillet, Christophe. "Algorithmes évolutionnistes appliqués à l'extraction de caractéristiques pour la reconnaissance du locuteur." Paris 6, 2008. http://www.theses.fr/2008PA066564.
Full textFaula, Yannick. "Extraction de caractéristiques sur des images acquises en contexte mobile : Application à la reconnaissance de défauts sur ouvrages d’art." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI077.
Full textThe French railway network comprises a huge infrastructure with many civil engineering structures. These suffer from degradation due to time and traffic, and are subject to periodic monitoring in order to detect the appearance of defects. At the moment, this inspection is mainly done visually by monitoring operators. Several companies are testing new photo-acquisition vectors, such as drones designed for civil engineering monitoring. In this thesis, the main goal is to develop a system able to detect, localize and record potential defects of the infrastructure. A major issue is detecting sub-pixel defects such as cracks in real time in order to improve the acquisition. For this task, a local analysis by thresholding is designed for processing large images. This analysis extracts points of interest (FLASH points: Fast Local Analysis by threSHolding) where a straight line can sneak in. A smart analysis of the spatial relationships of these points allows fine cracks to be detected and localized. The results of crack detection on degraded concrete surfaces from infrastructure images show better performance in time and robustness than state-of-the-art algorithms. Before the detection step, we have to ensure that the acquired images are of sufficient quality for processing; out-of-focus or motion-blurred images are rejected. We developed a method that reuses the preceding computations to assess quality in real time by extracting Local Binary Pattern (LBP) values. Then, in order to perform an acquisition suitable for photogrammetric reconstruction, images must have sufficient overlap. Our algorithm, reusing the points of interest from the detection, can perform a simple matching between two images without using algorithms such as RANSAC. Our method is invariant to rotation, translation and a range of scales. After the acquisition, with images of optimal quality, it is possible to exploit more time-consuming methods such as convolutional neural networks.
These are not able to detect cracks in real time but can detect other kinds of damage. However, the lack of data requires building our own database. Using independent classification approaches (one-class SVM classifiers), we developed a dynamic system able to evolve over time, detect and then classify the different kinds of damage. No system like ours appears in the literature for defect detection on civil engineering structures. The work carried out on feature extraction from images for damage detection can be used in other applications such as smart vehicle navigation or word spotting
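The one-detector-per-damage-type idea can be sketched as follows (a minimal illustration with hypothetical class names and toy feature vectors; the thesis's actual features and training protocol are not reproduced). Each class gets its own one-class SVM, so a new damage type can be added later without retraining the others, and samples rejected by every detector fall back to "unknown":

```python
import numpy as np
from sklearn.svm import OneClassSVM

class DynamicDamageClassifier:
    """One one-class SVM per known damage class; extensible over time."""

    def __init__(self, nu=0.1, gamma="scale"):
        self.models = {}
        self.nu, self.gamma = nu, gamma

    def add_class(self, name, features):
        # train a detector on samples of this class only
        self.models[name] = OneClassSVM(nu=self.nu, gamma=self.gamma).fit(features)

    def predict(self, feat):
        # score one feature vector against every detector;
        # "unknown" if no detector accepts it
        scores = {n: m.decision_function([feat])[0]
                  for n, m in self.models.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"
```

A usage sketch: train on two synthetic clusters, then query points near each cluster center and one far outlier.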
Younes, Lara. "Reconstruction spatio-temporelle de la ville de Reims à partir de documents anciens." Thesis, Reims, 2014. http://www.theses.fr/2014REIMS020.
Full textThis thesis is the first step toward the design of a volunteered system for the reconstruction and visualization of the urban space of the city of Reims through time. In this work, we address the problems of spatio-temporal recognition, reconstruction and georeferencing. The project relies on heterogeneous and sparse iconographic and contextual historical data, in particular a collection of old postcards and the current cadastral map. With volunteered contribution in mind, it is necessary to provide useful help to the user when bringing new knowledge into the system, and a robust solution is required because of the multiple changes of the urban fabric through time. We have developed a solution to meet those needs. The process fits into an incremental approach to reconstruction and is completed by the user. We propose to extract, reconstruct and visualize 3D multi-façade buildings from old postcards with no knowledge of their real dimensions. The construction of the models is based on the identification of 2D façades, which can be obtained through image analysis. This identification allows the reconstruction of 3D models, the extraction of their associated 2D façade textures and the enrichment of the system. The features found in the images allow an estimate of their dating, and the alignment of the models with the cadastral map allows their georeferencing. The system thus constructed is a primer for the design of a volunteered 3D+T GIS for Reims citizens to capture the history of their city
Martinez, Jabier. "Exploration des variantes d'artefacts logiciels pour une analyse et une migration vers des lignes de produits." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066344/document.
Full textSoftware Product Lines (SPLs) enable the derivation of a family of products based on variability management techniques. Inspired by the manufacturing industry, SPLs use feature configurations to satisfy different customer needs, along with reusable assets to allow systematic reuse. Capitalizing on existing variants by extracting the common and varying elements is referred to as an extractive approach to SPL adoption. Feature identification is needed to analyse the domain variability. Also, to identify the implementation elements associated with the features, their location is needed. In addition, feature constraints should be identified to guarantee that customers are not able to select invalid feature combinations. Then, the reusable assets associated with the features should be constructed, and finally a comprehensive feature model needs to be synthesized. This dissertation presents Bottom-Up Technologies for Reuse (BUT4Reuse), a unified, generic and extensible framework for mining software artefact variants. Special attention is paid to model-driven development scenarios. We also focus on benchmarks and on the analysis of variants, in particular on benchmarking feature location techniques and on identifying families of variants in the wild for experimenting with feature identification techniques. We present visualisation paradigms to support domain experts in feature naming and in feature constraint discovery. Finally, we investigate and discuss the mining of artefact variants for SPL analysis once the SPL is already operational. Concretely, we present an approach to find relevant variants within the SPL configuration space guided by end-user assessments
Péroumal, Armelle. "Caractérisation des fruits et de la pulpe de six accessions de Mammea americana : Aptitude à la transformation des fruits et caractérisation des composés phénoliques de la pulpe." Thesis, Antilles-Guyane, 2014. http://www.theses.fr/2014AGUY0702/document.
Full textOur work focuses on the physical and chemical properties of six mamey apple cultivars, in order to select elite cultivars suitable for food processing or as table fruit. The antioxidant activity of the fruit pulp, the identification and quantification of the polyphenols responsible for it, and an ultrasound-assisted extraction method were also investigated. According to our results, the postharvest routes could differ between cultivars. Pavé 11, Lézarde and Ti Jacques were found to be good for consumption, giving sweeter fruits with high total phenolic and carotenoid contents. Sonson, Pavé 11 and Lézarde had suitable characteristics for the manufacturing of mamey products. The polyphenolic composition of the pulp, determined by HPLC-DAD and UPLC-MS, showed the presence of phenolic acids, condensed tannins, flavonols and flavanols. The results of the antioxidant tests (DPPH and ORAC) indicate that the most antioxidant cultivar was Ti Jacques. The design and optimization of an ultrasound-assisted extraction method was carried out for polyphenol extraction. The results showed that the polyphenol-rich extract contains the same content of phenolic acids and flavonols as that obtained by the conventional method. Additionally, the dry extract, obtained with a "green" solvent, had good organoleptic properties
Jauréguy, Maïté. "Étude de la diffraction par impulsions électromagnétiques très courtes d'objets en espace libre ou enfouis : modélisation numérique et extraction des paramètres caractéristiques." Toulouse, ENSAE, 1995. http://www.theses.fr/1995ESAE0015.
Full textMeziani, Mohamed Aymen. "Estimation paramétrique et non-paramétrique en utilisant une approche de régression quantile." Electronic Thesis or Diss., Paris Est, 2019. http://www.theses.fr/2019PESC0084.
Full textThe quantile periodogram developed by Li (2012) is a new approach that provides an extended and richer breadth of information compared to the ordinary periodogram. However, it suffers from unstable performance when multiple peak frequencies are present, because spectral leakage produces extra small spikes. To alleviate this issue, a regularised version of the quantile periodogram is proposed. The asymptotic properties of the new spectral estimator are developed, and extensive simulations were performed to show the effectiveness of the proposed estimator in detecting hidden periodicities under different types of noise. A first application of the proposed approach was conducted in the framework of EEG (electroencephalogram) signal analysis. EEG signals are known for their non-stationarity and non-linearity. The newly proposed estimator was used along with other spectral estimators as a feature extraction method. These features are subsequently fed to classifiers to determine whether the signal belongs to one of the motor imagery classes. The second application was carried out in the framework of the European MEDOLUTION project for the study of accelerometer signals. The regularised quantile periodogram, as well as other spectral estimators, was applied to the classification of self-rehabilitation movements. The results suggest that the regularised quantile periodogram is a promising and robust method for detecting hidden periodicities, especially in cases of non-stationarity and non-linearity of the data, as well as for improving classification performance in the context of EEG and accelerometer signals
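The basic (unregularised) quantile periodogram can be sketched as follows: at each candidate frequency, fit a τ-quantile regression on cosine/sine regressors and record how much the check loss drops relative to the unconditional τ-quantile. This is a naive illustration of Li's idea, not the thesis's regularised estimator; the generic solver stands in for a proper quantile-regression routine:

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    # pinball (check) loss used in quantile regression
    return np.sum(u * (tau - (u < 0)))

def quantile_periodogram(y, freqs, tau=0.5):
    """Naive quantile periodogram: check-loss reduction at each frequency."""
    n = len(y)
    t = np.arange(n)
    q0 = np.quantile(y, tau)
    base = check_loss(y - q0, tau)          # loss of the unconditional quantile
    spec = []
    for w in freqs:
        X = np.column_stack([np.ones(n), np.cos(w * t), np.sin(w * t)])
        res = minimize(lambda b: check_loss(y - X @ b, tau),
                       x0=np.array([q0, 0.0, 0.0]), method="Nelder-Mead")
        spec.append(max(base - res.fun, 0.0))
    return np.array(spec)
```

On a pure sinusoid, the loss reduction peaks at the true frequency, which is the behaviour the regularised version then stabilises in the presence of leakage.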
Caparos, Matthieu. "Analyse automatique des crises d'épilepsie du lobe temporal à partir des EEG de surface." Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2006. http://tel.archives-ouvertes.fr/tel-00118993.
Full textRecent work validated in stereoelectroencephalography (SEEG) has demonstrated an evolution of the synchronizations between brain structures that allows the dynamics of temporal lobe seizures to be characterized.
The originality of this work is to extend the methods developed for SEEG to the study of surface EEG signals. From a medical point of view, this work contributes to presurgical diagnostic assistance.
Relationship measures such as coherence, the Directed Transfer Function (DTF), linear correlation (r²) and nonlinear correlation (h²) were adapted to address this problem. Various criteria, defined from clinical indications, highlighted the advantages of the nonlinear correlation coefficient for studying epilepsy with surface EEG.
The evolution of the nonlinear correlation coefficient is the basis of three automatic EEG signal processing applications:
– The first is determining the lateralization of the epileptogenic zone (EZ) at seizure onset. This information is the preliminary step in the search for the localization of the EZ.
– The search for an epileptic signature is the second application. The signature is extracted by an intra-patient matching and similarity-measurement algorithm.
– A classification of temporal lobe seizures is the third application. It is performed by extracting a set of features from the signatures found by the algorithm of the second application.
The database, which contains forty-three patients and eighty-seven seizures (two seizures per patient, three for one of them), guarantees a certain statistical significance.
Regarding the results, a correct lateralization rate of about 88% is obtained. This rate is very interesting because, in the literature, it is sometimes reached, but only by exploiting multimodal data and with non-automatic methods. After classification, 85% of mesial seizures were correctly classified, as well as 58% of mesio-lateral seizures
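The nonlinear correlation coefficient h² mentioned above is commonly computed as the variance fraction of one signal explained by a piecewise-linear regression on the other. The sketch below is a generic formulation of that idea (quantile binning is our own choice, not necessarily the thesis's exact implementation):

```python
import numpy as np

def h2(x, y, nbins=10):
    """Nonlinear correlation coefficient h2(y|x):
    1 - residual variance of a piecewise-linear fit of y on x / var(y)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    edges = np.quantile(xs, np.linspace(0, 1, nbins + 1))
    centers, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (xs >= lo) & (xs <= hi)
        if m.any():
            centers.append(xs[m].mean())
            means.append(ys[m].mean())
    f = np.interp(x, centers, means)   # piecewise-linear predictor of y from x
    resid = y - f
    return 1.0 - resid.var() / y.var()
```

Unlike the linear r², h² captures nonlinear dependence: for y = x² with symmetric x, r² is near zero while h² is close to one, which is why it is favoured for EEG coupling analysis.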
Hajri, Souhail. "Modélisation des surfaces rocheuses naturelles à partir d'une scannerisation laser 3D et extraction automatique de formes caractéristiques : applications aux spéléothèmes et surfaces géologiques." Grenoble, 2010. http://www.theses.fr/2010CHAMS039.
Full textThe research work presented in this dissertation concerns 3D image processing. We are interested in automating the extraction and characterization of relief forms in the natural environment from 3D point clouds acquired by LIDAR. Once these data are reconstructed as triangular meshes or TIN models (Triangulated Irregular Networks), we are particularly interested in 3D TIN model segmentation, which is one of the essential steps of the pattern recognition process. The goal of segmentation is to decompose the TIN model into homogeneous regions with common characteristics that correspond to significant geological objects. However, the images to be processed are relatively complex (natural forms) and thus require a priori knowledge. We therefore initially proposed a method for interactive segmentation based on the knowledge of the operator. The method involves manually marking the regions of interest in the models to extract the desired geological forms, and is based on the watershed method. Later, a second, more automated segmentation solution is proposed. This solution focuses on two objects whose discriminating features are perfectly known: planar discontinuities and stalagmites. The identification and characterization of planar discontinuities is based on the unsupervised clustering algorithm DBSCAN, which can automatically extract parameters related to the discontinuities of rock surfaces: orientation, spacing, roughness, etc. The second approach, which aims at automatic identification and characterization, is based on ellipse fitting
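The DBSCAN-based extraction of discontinuity families can be illustrated as follows (a toy sketch under our own assumptions: facet normals are clustered directly in 3D, and only a dip angle is derived per family; the thesis's actual parameters such as spacing and roughness are not reproduced):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_plane_orientations(normals, eps=0.05, min_samples=20):
    """Cluster unit normal vectors of mesh facets; each dense cluster
    corresponds to one family of planar discontinuities."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(n)
    families = {}
    for lab in set(labels) - {-1}:          # -1 marks noise points
        mean_n = n[labels == lab].mean(axis=0)
        mean_n /= np.linalg.norm(mean_n)
        # dip angle of the family's mean orientation (z-up convention assumed)
        dip = np.degrees(np.arccos(abs(mean_n[2])))
        families[lab] = {"normal": mean_n, "dip_deg": dip}
    return labels, families
```

Because DBSCAN needs no preset number of clusters and tolerates noise, it suits rock faces where the number of discontinuity sets is unknown in advance.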
Loiselle, Stéphane. "Traitement bio-inspiré de la parole pour système de reconnaissance vocale." Thèse, Université de Sherbrooke, 2010. http://savoirs.usherbrooke.ca/handle/11143/1952.
Nguyen, Thanh-Khoa. "Image segmentation and extraction based on pixel communities." Thesis, La Rochelle, 2019. http://www.theses.fr/2019LAROS035.
Full textImage segmentation has become an indispensable task that is widely employed in several image processing applications, including object detection, object tracking, automatic driver assistance and traffic control systems. The literature abounds with algorithms for achieving image segmentation tasks. These methods can be divided into several main groups according to the underlying approach, such as region-based image segmentation, feature-based clustering, graph-based approaches and artificial neural network-based image segmentation. Recently, complex networks have flourished in both theory and applications. Hence, image segmentation techniques based on community detection algorithms have been proposed and have become an interesting discipline in the literature. In this thesis, we propose a novel framework for community detection based image segmentation. The idea of bringing the social network analysis domain into image segmentation is appealing to most authors, and these lines of research are largely in harmony. However, how community detection algorithms can be applied to image segmentation efficiently is a topic that has challenged researchers for decades. The contribution of this thesis is an effort to construct the best complex networks for applying community detection, and to propose novel agglomerative methods that aggregate homogeneous regions to produce good image segmentation results. Besides, we also propose a content-based image retrieval system using the same features as the ones obtained by the image segmentation processes. The proposed image search engine for real images can search for the images most similar to a query image. This content-based image retrieval relies on the incorporation of our extracted features into a Bag-of-Visual-Words model. This representative application shows that image segmentation benefits several image processing and computer vision applications.
Our methods have been tested on several data sets and evaluated with many well-known segmentation evaluation metrics. The proposed methods produce efficient image segmentation results compared to the state of the art
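To make the general idea concrete, here is a toy sketch (not the thesis's actual network construction or agglomeration strategy): build a 4-connected pixel graph with intensity-similarity weights, then let a modularity-based community detection algorithm produce the segmentation labels:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def segment_by_communities(img, sigma=10.0):
    """Segment a grayscale image by community detection on a pixel graph."""
    h, w = img.shape
    G = nx.Graph()
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):        # 4-connectivity
                ny_, nx_ = y + dy, x + dx
                if ny_ < h and nx_ < w:
                    # similar intensities -> strong edge, dissimilar -> weak
                    wgt = np.exp(-abs(float(img[y, x]) - float(img[ny_, nx_])) / sigma)
                    G.add_edge((y, x), (ny_, nx_), weight=wgt)
    communities = greedy_modularity_communities(G, weight="weight")
    labels = np.zeros((h, w), dtype=int)
    for k, comm in enumerate(communities):
        for (y, x) in comm:
            labels[y, x] = k
    return labels
```

On a toy two-region image, communities stay on their own side of the intensity boundary because cross-boundary edges carry near-zero weight.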
Moukadem, Ali. "Segmentation et classification des signaux non-stationnaires : application au traitement des sons cardiaque et à l'aide au diagnostic." Phd thesis, Université de Haute Alsace - Mulhouse, 2011. http://tel.archives-ouvertes.fr/tel-00713820.
Full textLiang, Ke. "Oculométrie Numérique Economique : modèle d'apparence et apprentissage par variétés." Thesis, Paris, EPHE, 2015. http://www.theses.fr/2015EPHE3020/document.
Full textGaze trackers offer a powerful tool for diverse fields of study, in particular eye movement analysis. In this thesis, we present a new appearance-based real-time gaze tracking system using only a remote webcam and no infrared illumination. Our proposed gaze tracking model has four components: eye localization, eye feature extraction, eye manifold learning and gaze estimation. Our research focuses on developing methods for each component of the system. Firstly, we propose a hybrid method to localize the eye region in real time in the frames captured by the webcam. The eye is detected by an Active Shape Model and EyeMap in the first frame where it occurs, and then tracked through a stochastic method, the particle filter. Secondly, we employ Center-Symmetric Local Binary Patterns on the detected eye region, which has been divided into blocks, in order to obtain the eye features. Thirdly, we introduce manifold learning techniques, such as Laplacian Eigenmaps, to learn different eye movements from a set of collected eye images. This unsupervised learning helps to construct an automatic and correct calibration phase. Finally, for gaze estimation, we propose two models: a semi-supervised Gaussian Process Regression model to estimate the coordinates of the eye direction, and a prediction model by spectral clustering to classify different eye movements. Our system with 5-point calibration not only reduces the run-time cost, but also estimates the gaze accurately. Our experimental results show that our gaze tracking model has fewer constraints on the hardware settings and can be applied efficiently in different real-time applications
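Center-Symmetric LBP compares center-symmetric pairs of neighbors rather than each neighbor with the center pixel, halving the code length (4 bits instead of 8 for an 8-neighborhood). A minimal sketch of the block-wise CS-LBP feature extraction (illustrative threshold and block grid, not necessarily those used in the thesis):

```python
import numpy as np

def cs_lbp(img, T=0.01):
    """CS-LBP codes: 4 center-symmetric comparisons -> values in 0..15."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # 8 neighbors, circular order
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for k in range(4):
        dy1, dx1 = offs[k]
        dy2, dx2 = offs[k + 4]                  # the center-symmetric partner
        n1 = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1].astype(float)
        n2 = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2].astype(float)
        codes += (n1 - n2 > T).astype(int) << k
    return codes

def block_histograms(codes, grid=(4, 4)):
    """Concatenate normalized 16-bin histograms over a grid of blocks."""
    h, w = codes.shape
    feats = []
    for by in range(grid[0]):
        for bx in range(grid[1]):
            block = codes[by * h // grid[0]:(by + 1) * h // grid[0],
                          bx * w // grid[1]:(bx + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=16, range=(0, 16))
            feats.append(hist / max(block.size, 1))
    return np.concatenate(feats)
```

The resulting per-eye-image feature vector (here 16 bins x 16 blocks = 256 values) is the kind of descriptor that can then be fed to the manifold-learning stage.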
Leveau, Valentin. "Représentations d'images basées sur un principe de voisins partagés pour la classification fine." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT257/document.
Full textThis thesis focuses on the issue of fine-grained classification, a particular classification task where classes may be visually distinguishable only from subtle localized details and where the background often acts as a source of noise. This work is mainly motivated by the need to devise finer image representations for such fine-grained classification tasks by encoding enough localized discriminant information, such as the spatial arrangement of local features. To this aim, the main research line we investigate relies on spatially localized similarities between images, computed thanks to efficient approximate nearest neighbor search techniques and localized parametric geometry. The main originality of our approach is to embed such spatially consistent localized similarities into a high-dimensional global image representation that preserves the spatial arrangement of the fine-grained visual patterns (contrary to traditional encoding methods such as BoW, Fisher or VLAD vectors). In a nutshell, this is done by considering all raw patches of the training set as a large visual vocabulary and by explicitly encoding their similarity to the query image. In more detail: the first contribution proposed in this work is a classification scheme based on a spatially consistent k-NN classifier that relies on pooling similarity scores between local features of the query and those of the similar retrieved images in the vocabulary set. As this set can be composed of many local descriptors, we propose to scale up our approach by using approximate k-nearest-neighbor search methods. The main contribution of this work is then a new aggregation-based explicit embedding derived from a newly introduced match kernel based on shared nearest neighbors of localized feature vectors combined with local geometric constraints.
The originality of this new similarity-based representation space is that it directly integrates spatially localized geometric information into the aggregation process. Finally, as a third contribution, we propose a strategy to drastically reduce, by up to two orders of magnitude, the high dimensionality of the previously introduced over-complete image representation while still providing competitive image classification performance. We validated our approaches by conducting a series of experiments on several classification tasks involving rigid objects, such as FlickrsLogos32 or Vehicles29, but also on tasks involving finer visual knowledge, such as FGVC-Aircrafts, Oxford-Flower102 or CUB-Birds200. We also demonstrated significant results on fine-grained audio classification tasks, such as the LifeCLEF 2015 bird species identification challenge, by proposing a temporal extension of our image representation. Finally, we notably showed that our dimensionality reduction technique used on top of our representation resulted in a highly interpretable visual vocabulary composed of the most representative image regions for different visual concepts of the training base
Pan, Xiaoxi. "Towards FDG-PET image characterization and classification : application to Alzheimer's disease computer-aided diagnosis." Thesis, Ecole centrale de Marseille, 2019. http://www.theses.fr/2019ECDM0008.
Full textAlzheimer's disease (AD) is becoming the dominant type of neurodegenerative brain disease in elderly people, and is currently incurable and irreversible. The hope is to diagnose its early stage, Mild Cognitive Impairment (MCI), so that interventions can be applied to delay the onset. Fluorodeoxyglucose positron emission tomography (FDG-PET) is considered a significant and effective modality for diagnosing AD and its early phase, since it can capture metabolic changes in the brain and thereby indicate abnormal regions. This thesis is therefore devoted to identifying AD versus Normal Control (NC) subjects and predicting MCI conversion from the FDG-PET modality. For this purpose, three independent novel methods are proposed. The first method focuses on developing connectivities among the anatomical regions involved in FDG-PET images, which are rarely addressed in previous methods. Such connectivities are represented by either similarities or graph measures among regions. Combined with each region's properties, these features are fed into a designed ensemble classification framework to tackle the problems of AD diagnosis and MCI conversion prediction. The second method investigates features that characterize FDG-PET images from the viewpoint of spatial gradients, which link the commonly used voxel-wise and region-wise features. The spatial gradient is quantified by a 2D histogram of orientations and expressed in a multiscale manner. The results are obtained by integrating spatial gradients at different scales within different regions. The third method applies Convolutional Neural Network (CNN) techniques to three views of the FDG-PET data, leading to a multiview CNN architecture. Such an architecture facilitates convolutional operations, from 3D to 2D, while taking spatial relations into account thanks to a novel mapping layer with cuboid convolution kernels. The three views are then combined to make a joint decision.
Experiments conducted on a public dataset show that the three proposed methods achieve significant performance and, moreover, outperform most state-of-the-art approaches
Barakat, Mustapha. "Fault diagnostic to adaptive classification schemes based on signal processing and using neural networks." Le Havre, 2011. http://www.theses.fr/2011LEHA0023.
Full textIndustrial Fault Detection and Isolation (FDI) has become more essential in light of increased automation in industry. The significant increase in system and plant complexity during recent decades has made FDI tasks appear as major steps in all industrial processes. In this thesis, adaptive intelligent techniques based on artificial neural networks, combined with advanced signal processing methods, are developed and put forward for the systematic detection and diagnosis of faults in industrial systems. The proposed on-line classification technique consists of three main stages: (1) signal modeling and feature extraction, (2) feature classification and (3) output decision. In the first stage, our approach relies on the assumption that faults are reflected in the extracted features. For the feature classification algorithm, several techniques based on neural networks are proposed. A binary decision tree relying on a multiclass Support Vector Machine (SVM) algorithm is put forward. The technique dynamically selects an appropriate feature at each level (branch) and classifies it with a binary classifier. Another advanced classification technique is proposed, based on a mapping network that can extract features from historical data and requires prior knowledge about the process. The significance of this network lies in its ability to preserve old data with equitable probabilities during the mapping process. Each class of faults or disturbances is represented by a particular surface at the end of the mapping process. The third contribution focuses on building a network with nodes that activate in specific subspaces of the different classes. The concept behind this method is to divide the pattern space of the faults; in each particular sub-space, a special diagnosis agent is trained. An advanced parameter selection is embedded in this algorithm to improve the confidence of the classification diagnosis.
All contributions are applied to the fault detection and diagnosis of various industrial systems in the domains of mechanical and chemical engineering. The performance of our approaches is studied and compared with several existing neural network methods, and the accuracy of all methodologies is carefully evaluated
Christoff, Vesselinova Nicole. "Détection et caractérisation d'attributs géométriques sur les corps rocheux du système solaire." Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0565/document.
Full textOne of the challenges of planetary science is determining the age of the surfaces of the different celestial bodies in the solar system, in order to understand their formation and evolution processes. One approach relies on the analysis of impact crater density and size. Due to the huge quantity of data to process, automatic approaches have been proposed for detecting impact craters in order to facilitate this dating process. They generally use the color values from images or the elevation values from a Digital Elevation Model (DEM). In this PhD thesis, we propose a new approach for detecting crater rims. The main idea is to combine curvature analysis with neural network based classification. The approach has two main steps: first, each vertex of the mesh is labeled with the value of the minimal curvature; second, this curvature map is fed into a neural network to automatically detect the shapes of interest. The results show that shape detection is more efficient using a two-dimensional map based on the computation of discrete differential estimators than using the elevation value at each vertex. This approach significantly reduces the number of false negatives compared to previous approaches based on topographic information only. The validation of the method is performed on DEMs of Mars acquired by a laser altimeter aboard NASA's Mars Global Surveyor spacecraft, combined with a database of manually identified craters
Commandeur, Frédéric. "Fusion d'images multimodales pour la caractérisation du cancer de la prostate." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S038/document.
Full textThis thesis concerns prostate cancer characterization based on multimodal imaging data. The purpose is to identify and characterize the tumors using in-vivo observations, including mMRI and PET/CT, with a biological reference obtained from the anatomopathological analysis of the radical prostatectomy specimen, which provides histological slices. Firstly, we propose two registration methods to match the multimodal images in the spatial reference defined by MRI. The first algorithm aims at aligning PET/CT images with MRI by combining contour information and the presence probability of the prostate. The objective of the second is to register the histological slices with the MRI. Following the Stanford protocol, the radical prostatectomy specimen is cut more thinly than in clinical routine, providing more slices. The correspondence between histological and MRI slices is then estimated by combining prior knowledge of the slicing with salient points (SURF) extracted in both modalities. This initialization step allows for an affine and non-rigid registration based on mutual information and a distance map of intraprostatic structures. Secondly, structural (Haar, Gabor, etc.) and functional (Ktrans, Kep, SUV, TLG, etc.) descriptors are extracted for each prostate voxel from the MRI and PET images. The corresponding biological labels obtained from the anatomopathological analysis are associated with the feature vectors. The biological labels consist of the Gleason score, which indicates aggressiveness, and immunohistochemistry grades, which quantify biological processes such as hypoxia and cell growth. Finally, these pairs (feature vectors/biological information) are used as training data to build Random Forest (RF) and SVM classifiers to characterize tumors from new in-vivo observations. In this work, we perform a feasibility study with nine patients
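Mutual information, the similarity measure driving the affine and non-rigid registration step in this abstract, can be estimated from a joint intensity histogram. The following minimal sketch is not the thesis's implementation; the bin count and the test images are illustrative.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized images.

    Estimated from the joint intensity histogram; higher values mean
    one image's intensities predict the other's better, which is why
    registration maximizes this score over transformations.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint distribution
    px = pxy.sum(axis=1, keepdims=True)      # marginal of a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of b
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# An image compared with itself scores far higher than against unrelated noise.
mi_self = mutual_information(img, img)
mi_noise = mutual_information(img, rng.random((64, 64)))
```

A registration loop would evaluate `mutual_information(fixed, warp(moving, params))` and adjust `params` to maximize it.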
Kodewitz, Andreas. "Méthodes pour l'analyse de grands volumes d'images appliquées à la détection précoce de la maladie d'Alzheimer par analyse de PDG-PET scans." Phd thesis, Université d'Evry-Val d'Essonne, 2013. http://tel.archives-ouvertes.fr/tel-00846689.
Full textBai, Cong. "Analyse d'images pour une recherche d'images basée contenu dans le domaine transformé." Phd thesis, INSA de Rennes, 2013. http://tel.archives-ouvertes.fr/tel-00907290.
Full textBayle, Elodie. "Entre fusion et rivalité binoculaires : impact des caractéristiques des stimuli visuels lors de l’utilisation d’un système de réalité augmentée semi-transparent monoculaire." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG029.
Full textMonocular augmented reality devices are used in the aeronautical field to enhance pilots' vision by providing access to essential information such as flight symbology. They are lighter and more adjustable than their binocular counterparts, can be integrated into any aircraft, and allow information to be retained regardless of gaze direction. However, they generate a particular percept, since a monocular virtual image is superimposed on the real binocular environment: different information is projected onto corresponding regions of the two eyes, creating an interocular conflict. The goal of this thesis is to evaluate the impact of stimulus characteristics on the performance of tasks carried out with this type of system, in order to optimize its use. Two psychophysical studies and an ecological study in a flight simulator were carried out. All of them showed good comfort during exposure to the interocular conflict. Performance was evaluated according to the characteristics of the binocular background, the display of the monocular image, and the characteristics of the events to be detected. The choice of the presenting eye is not insignificant, given the differences in performance achieved with the monocular display on each of the two eyes. Our results from the three studies also show that, as with two fusible or two dichoptic images, performance depends on the visual stimuli. They therefore suggest considering an adaptive symbology, which cannot be reduced to the brightness adjustment currently available to pilots
Gutierrez, Luis Felipe. "Extraction et caractéristiques des huiles de l'argousier (Hippophaë rhamnoides L.). Une étude des effets de la méthode de déshydratation des fruits sur le rendement d'extraction et la qualité des huiles." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24426/24426.pdf.
Full textSea buckthorn (Hippophaë rhamnoides L.) seed and pulp oils have been recognized for their nutraceutical properties. The effects of air-drying and freeze-drying on the extraction yields and quality of oils from sea buckthorn (cv. Indian-Summer) seeds and pulp were studied. Oil extractions were carried out using hexane. Air-dried (ADS) and freeze-dried (FDS) seeds gave similar extraction yields (∼12% w/w), whereas those of air-dried (ADP) and freeze-dried (FDP) pulps were significantly different (35.9±0.8 vs. 17.1±0.6% w/w). Fatty acid analysis revealed that α-linolenic (37.2-39.6%), linoleic (32.4-34.2%) and oleic (13.1%) acids were the main fatty acids in seed oils, while pulp oils were rich in palmitoleic (39.9%), palmitic (35.4%) and linoleic (10.6%) acids. Lipid fractionation of the crude oils, obtained by solid-phase extraction, yielded mainly neutral lipids (93.9-95.8%). The peroxide values of the seed and pulp oils were approximately 1.8 and 3.0-5.4 meq/kg, respectively. The melting behavior of the oils was characterized by differential scanning calorimetry.
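The yields quoted in this abstract are expressed as % w/w, i.e., grams of oil recovered per 100 g of dried material. A trivial sketch of that arithmetic (the input masses below are made up for illustration, not taken from the thesis):

```python
def extraction_yield_pct(oil_mass_g, dried_sample_mass_g):
    """Extraction yield in % w/w: grams of oil per 100 g of dried material."""
    return 100.0 * oil_mass_g / dried_sample_mass_g

# E.g., 3.59 g of oil from 10 g of air-dried pulp gives the ~35.9 % w/w figure.
adp_yield = extraction_yield_pct(3.59, 10.0)
```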