Dissertations / Theses on the topic 'Traitement de larges images'
González Obando, Daniel Felipe. "From digital to computational pathology for biomarker discovery." Electronic Thesis or Diss., Université Paris Cité, 2019. http://www.theses.fr/2019UNIP5185.
Histopathology aims to analyze images of biological tissues to assess the pathological condition of an organ and to provide a diagnosis. The advent of high-resolution slide scanners has opened the door to new possibilities for acquiring very large images (whole slide imaging), multiplexing stainings, exhaustive extraction of visual information and large-scale annotations. This thesis proposes a set of algorithmic methods aimed at facilitating and optimizing these different aspects. First, we propose a multi-scale registration method for multi-labeled histological images based on the properties of B-splines to model, in a continuous way, a discrete image. We then propose new approaches to perform morphological analysis on weakly simple polygons generalized by straight-line graphs. They are based on the formalism of straight skeletons (an approximation of curved skeletons defined by straight segments), built with the help of motorcycle graphs. This structure makes it possible to perform mathematical morphological operations on polygons. The precision of operations on noisy polygons is obtained by refining the construction of straight skeletons. We also propose an algorithm for computing the medial axis from straight skeletons, showing it is possible to approximate the original polygonal shape. Finally, we explore weighted straight skeletons that allow directional morphological operations. These morphological analysis approaches provide consistent support for improving the segmentation of objects through contextual information and performing studies related to the spatial analysis of interactions between different structures of interest within the tissue. All the proposed algorithms are optimized to handle gigapixel images while ensuring analysis reproducibility, in particular thanks to the creation of the Icytomine plugin, an interface between Icy and Cytomine.
Paulin, Mattis. "De l'apprentissage de représentations visuelles robustes aux invariances pour la classification et la recherche d'images." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM007/document.
This dissertation focuses on designing image recognition systems which are robust to geometric variability. Image understanding is a difficult problem, as images are two-dimensional projections of 3D objects, and representations that must fall into the same category, for instance objects of the same class in classification, can display significant differences. Our goal is to make systems robust to the right amount of deformations, this amount being automatically determined from data. Our contributions are twofold. We show how to use virtual examples to enforce robustness in image classification systems, and we propose a framework to learn robust low-level descriptors for image retrieval. We first focus on virtual examples, as transformations of real ones. One image generates a set of descriptors, one for each transformation, and we show that data augmentation, i.e. considering them all as i.i.d. samples, is the best performing method to use them, provided a voting stage with the transformed descriptors is conducted at test time. Because transformations carry various levels of information, can be redundant, and can even be harmful to performance, we propose a new algorithm able to select a set of transformations while maximizing classification accuracy. We show that a small number of transformations is enough to considerably improve performance for this task. We also show how virtual examples can replace real ones for a reduced annotation cost. We report good performance on standard fine-grained classification datasets. In a second part, we aim at improving the local region descriptors used in image retrieval and in particular at proposing an alternative to the popular SIFT descriptor. We propose new convolutional descriptors, called patch-CKN, which are learned without supervision.
We introduce a linked patch- and image-retrieval dataset based on structure from motion of web-crawled images, and design a method to accurately test the performance of local descriptors at patch and image levels. Our approach outperforms both SIFT and all tested approaches with convolutional architectures on our patch and image benchmarks, as well as on several state-of-the-art datasets.
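The transformed-descriptor voting described in this abstract can be sketched as a test-time augmentation loop (a hypothetical minimal example, not the thesis code; the toy classifier and the flip transforms are assumptions):

```python
import numpy as np

def predict_with_voting(image, transforms, classify):
    """Test-time augmentation: classify every transformed copy of the
    image and average the per-class scores (i.i.d. voting)."""
    scores = [classify(t(image)) for t in transforms]
    return np.mean(scores, axis=0)

# Toy stand-ins (assumptions, not the thesis components): a "classifier"
# scoring two classes from the image mean, and flips as transformations.
classify = lambda img: np.array([img.mean(), 1.0 - img.mean()])
transforms = [lambda x: x, np.fliplr, np.flipud]

image = np.ones((4, 4)) * 0.25
print(predict_with_voting(image, transforms, classify))  # -> [0.25 0.75]
```

The selection step of the thesis would then keep only the subset of `transforms` that improves validation accuracy.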
Villéger, Emmanuel. "Constance de largeur et désocclusion dans les images digitales." Phd thesis, Université de Nice Sophia-Antipolis, 2005. http://tel.archives-ouvertes.fr/tel-00011229.
We group bright points and/or objects according to certain rules to form larger objects, Gestalts.
The first part of this thesis is devoted to width constancy. The width-constancy Gestalt groups points located between two edges that remain parallel. We therefore look for "parallel" curves in images. Since we want an a contrario detection, we propose to quantify the "non-parallelism" of two curves by three methods. The first method uses a generative model of regular curves, from which we compute a probability. The second is a Monte Carlo simulation method for estimating that probability. Finally, the third method corresponds to a limited expansion of the first one, obtained by letting a parameter tend to 0 under certain constraints; this leads to a partial differential equation (PDE). Among these three methods, the Monte Carlo one is more robust and faster.
The PDE obtained is very similar to those used for image disocclusion. That is why the second part of this thesis addresses the disocclusion problem. We present existing methods, then a new method based on a system of two PDEs, one of which is inspired by that of the first part. We introduce the probability of the orientation of the image gradient, thereby taking into account the uncertainty in the computed orientation of the image gradient; this uncertainty is quantified in relation to the gradient norm.
With the quantification of the non-parallelism of two curves, the next step is the detection of width constancy in images. A threshold must then be defined to select the good responses of the detector and, above all, "maximal" responses must be singled out among them. The PDE system for disocclusion depends on many parameters, so a calibration method is needed to obtain good results adapted to each image.
Soltani, Mariem. "Partitionnement des images hyperspectrales de grande dimension spatiale par propagation d'affinité." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S099/document.
Interest in hyperspectral image data has been constantly increasing over the last years. Indeed, hyperspectral images provide more detailed information about the spectral properties of a scene and allow a more precise discrimination of objects than traditional color images or even multispectral images. The high spatial and spectral resolutions of hyperspectral images make it possible to characterize pixel information content precisely. Although the potential of hyperspectral technology appears to be relatively wide, the analysis and processing of these data remain complex; exploiting such large data sets presents a great challenge. In this thesis, we are mainly interested in the reduction and partitioning of hyperspectral images of high spatial dimension. The proposed approach consists essentially of two steps: feature extraction and classification of the pixels of an image. A new approach for feature extraction, based on spatial and spectral tri-occurrence matrices defined on cubic neighborhoods, is proposed. A comparative study shows the discriminating power of these new features over conventional ones as well as spectral signatures. Concerning the classification step, we focus in this thesis on the unsupervised, non-parametric classification approach because it has several advantages: no a priori knowledge required, image partitioning for any application domain, and adaptability to the image information content. A comparative study of the best-known semi-supervised (known number of classes) and unsupervised non-parametric methods (K-means, FCM, ISODATA, AP) showed the superiority of affinity propagation (AP). Despite its high correct classification rate, affinity propagation has two major drawbacks. Firstly, the number of classes is over-estimated when the preference parameter p is initialized to the median value of the similarity matrix.
Secondly, the partitioning of large hyperspectral images is hampered by AP's quadratic computational complexity, which makes its direct application to this data type impossible. To overcome these two drawbacks, we propose an approach which consists of reducing the number of pixels to be classified before applying AP, by automatically grouping data points with high similarity. We also introduce a step to optimize the preference parameter value by maximizing a criterion related to the interclass variance, in order to estimate the number of classes correctly. The proposed approach was successfully applied to synthetic images, both mono-component and multi-component, and showed a consistent discrimination of the obtained classes. It was also successfully applied and compared on hyperspectral images of high spatial dimension (1000 × 1000 pixels × 62 bands) in the context of a real application for the detection of invasive and non-invasive vegetation species.
Assadzadeh, Djafar. "Traitement des images échographiques." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb37595555x.
Assadzadeh, Djafar. "Traitement des images échographiques." Paris 13, 1986. http://www.theses.fr/1986PA132013.
Fdida, Nicolas. "Développement d'un système de granulométrie par imagerie : application aux sprays larges et hétérogènes." Rouen, 2008. http://www.theses.fr/2008ROUES050.
In many industrial applications, a given mass of liquid is sprayed by an injector into a carrier gas in order to optimize combustion by increasing the liquid-gas interface area. The characteristics of a spray are often given by the measurement of the drop size distribution, under the hypothesis that all liquid elements are spherical. Of course, this case is not the rule and could only occur at the end of the evolution of the spray. In this study we develop a shadow imaging system to measure drop sizes independently of the drop shapes. A calibration procedure is described, based on an imaging model developed in our laboratory. This model takes into account image parameters of the drop to measure its size and to estimate its level of defocus. The goal of this calibration procedure is to define the measurement volume of the imaging system. A tool based on the characterization of the shape of the drops is proposed. Morphological criteria are defined to classify droplets belonging to different kinds of shape families, such as the spherical, elliptical and Cassini oval families. The introduction of the Cassini oval family yields a better description of liquid elements during the atomization process. This original approach underlines a segmentation of shapes between ligaments, spherical droplets and ovoids. The velocity of the droplets is also investigated with this imaging system. For that purpose, a method of Particle Tracking Velocimetry (PTV) has been developed; it consists in matching pairs of droplets in a couple of images recorded at two successive times. The imaging system has been used to characterize gasoline sprays produced by gasoline injectors of indirect and direct injection types. The drop sizes are compared with those given by two other drop sizing techniques: a phase Doppler anemometer and a laser diffraction granulometer.
Attention was paid to the differences in the measurement volumes of the different techniques in order to compare the drop sizes given by each technique.
Vitter, Maxime. "Cartographier l'occupation du sol à grande échelle : optimisation de la photo-interprétation par segmentation d'image." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSES011/document.
Over the last fifteen years, the emergence of remote sensing data at Very High Spatial Resolution (VHRS) and the democratization of Geographic Information Systems (GIS) have helped to meet new and growing needs for spatial information. The development of new mapping methods offers an opportunity to understand and anticipate land cover change at large scales, which is still poorly known. In France, spatial databases about land cover and land use at large scale have become an essential part of current planning and monitoring of territories. However, the acquisition of this type of database is still a difficult need to satisfy, because the demands concern tailor-made cartographic productions adapted to the local problems of the territories. Faced with this growing demand, regular providers of this type of data seek to optimize production processes with recent image-processing techniques. However, photo interpretation remains the providers' favoured method: thanks to its great flexibility, it still meets the need for mapping at large scale, despite its high cost. Substituting fully automated production methods for photo interpretation is rarely considered. Nevertheless, recent developments in image segmentation can contribute to optimizing the practice of photo interpretation. This thesis presents a series of tools that contribute to the development of digitalization assistance for the photo-interpretation exercise. The assistance consists in a pre-cutting of the landscape obtained from a segmentation carried out on a VHRS image. The tools were developed through three large-scale cartographic services, each with different production instructions, commissioned by public entities. The contribution of these automation tools is assessed through a comparative analysis of two mapping procedures: manual photo interpretation versus digitally assisted segmentation. The productivity gains brought by segmentation are evaluated using quantitative and qualitative indices on different landscape configurations. To varying degrees, it appears that whatever type of landscape is mapped, the gains associated with assisted mapping are substantial. These gains are discussed both technically and thematically from a commercial perspective.
Atnafu, Besufekad Solomon. "Modélisation et traitement de requêtes images complexes." Lyon, INSA, 2003. http://theses.insa-lyon.fr/publication/2003ISAL0032/these.pdf.
Querying images by their content is of increasing importance in many application domains. For this reason, much research has been conducted in two fields: DBMS and pattern recognition. However, the majority of existing systems do not offer any formal framework integrating the techniques of the two fields for effective multi-criteria queries on images, while such a framework is necessary to treat these types of image queries effectively. In this thesis, we present image data repository models that allow multi-criteria queries to be formulated. Then, based on this model, we introduce a similarity-based algebra that makes it possible to formulate similarity-based and multi-criteria queries effectively, and we present the properties of its operators. We also present techniques for similarity-based query optimization. Finally, we developed a prototype called EMIMS (Extended Medical Image Management System) to demonstrate our proposals in practical applications.
Duclos, Pierre. "Étude du parallélisme en traitement des images." Nice, 1988. http://www.theses.fr/1988NICE4209.
Atnafu, Besufekad Solomon, and Lionel Brunie. "Modélisation et traitement de requêtes images complexes." Villeurbanne : Doc'INSA, 2004. http://docinsa.insa-lyon.fr/these/pont.php?id=atnafu_besufekad.
Dumas de la Roque, Eric. "Traitement arthroscopique des larges ruptures de la coiffe des rotateurs de l'épaule : à propos de 26 cas." Bordeaux 2, 1992. http://www.theses.fr/1992BOR2M149.
Full textHachicha, Walid. "Traitement, codage et évaluation de la qualité d’images stéréoscopiques." Thesis, Paris 13, 2014. http://www.theses.fr/2014PA132037.
Recent developments in 3D stereoscopic technology have opened new horizons in many application fields, such as 3DTV, 3D cinema, video games and videoconferencing, and at the same time raised a number of challenges related to the processing and coding of 3D data. Today, stereoscopic imaging technology is becoming widely used in many fields, but problems remain related to the physical limitations of image acquisition systems, e.g. transmission and storage requirements. The objective of this thesis is the development of methods for improving the main steps of the stereoscopic imaging pipeline: enhancement, coding and quality assessment. The first part of this work addresses quality issues, including contrast enhancement and quality assessment of stereoscopic images. Three algorithms have been proposed. The first algorithm deals with contrast enhancement, aiming at promoting the local contrast guided by a computed object importance map of the visual scene. The second and third algorithms aim at predicting the distortion severity of stereo images. In the second, we propose a full-reference metric that requires the reference image and is based on 2D and 3D findings such as amplitude non-linearity, contrast sensitivity, frequency and directional selectivity, and a binocular just-noticeable-difference model. In the third algorithm, we propose a no-reference metric which needs only the stereo pair to predict its quality. The latter is based on natural scene statistics to identify the distortion affecting the stereo image. The 3D statistical features combine features extracted from the natural stereo pair with those from the estimated disparity map. To this end, a joint wavelet transform, inspired by the vector lifting concept, is first employed; the features are then extracted from the obtained subbands. The second part of this dissertation addresses stereoscopic image compression. We started by investigating a one-dimensional directional discrete cosine transform to encode the disparity-compensated residual image. Afterwards, based on the wavelet transform, we investigated two techniques for optimizing the computation of the residual image. Finally, we present efficient bit allocation methods for stereo image coding. Generally, the bit allocation problem is solved empirically by searching for the optimal rates leading to the minimum distortion value. Thanks to recently published work on approximations of the entropy and distortion functions, we propose accurate and fast bit allocation schemes appropriate for open-loop and closed-loop stereo coding structures.
Abou, Chakra Sara. "La Boucle Locale Radio et la Démodulation directe de signaux larges bandes à 26GHz." Phd thesis, Télécom ParisTech, 2004. http://pastel.archives-ouvertes.fr/pastel-00001988.
Ainouz, Samia. "Analyse et traitement des images codées en polarisation." Phd thesis, Université Louis Pasteur - Strasbourg I, 2006. http://tel.archives-ouvertes.fr/tel-00443685.
Ben Arab, Taher. "Contribution des familles exponentielles en traitement des images." Phd thesis, Université du Littoral Côte d'Opale, 2014. http://tel.archives-ouvertes.fr/tel-01019983.
Daniel, Tomasz. "Dispositif pour le traitement numérique des images magnétooptiques." Paris 11, 1985. http://www.theses.fr/1985PA112256.
An electronic system is described which effectively subtracts the non-magnetic contrast of magnetooptical images. At the same time, noise is reduced to such a degree that the contrast visibility limit in domain observation is expanded by at least an order of magnitude. The incoming video signal, sampled at 10 Hz and coded on 8 bits, is stored in one of three 512x512x8 image memories. A high-speed arithmetic processor adds and subtracts images, pixel by pixel, at 10 bytes/s. The device is controlled by a 6809 microprocessor. The main mode of operation is to compare the image of a sample with magnetic domains to the image of the sample in a saturated state. The difference is then accumulated to achieve the necessary noise reduction. Improvements of longitudinal Kerr effect images are shown.
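The subtract-and-accumulate principle of this system can be sketched in a few lines (an illustrative reconstruction with synthetic data, not the original 6809-based hardware; the frame count, image size and noise level are assumptions):

```python
import numpy as np

def domain_contrast(frames, saturated):
    """Subtract the saturated-state (non-magnetic) image from every
    frame, then average the differences to reduce noise roughly by
    a factor of sqrt(number of frames)."""
    diffs = [f.astype(float) - saturated.astype(float) for f in frames]
    return np.mean(diffs, axis=0)

rng = np.random.default_rng(1)
saturated = np.full((4, 4), 100.0)                   # non-magnetic background
pattern = np.zeros((4, 4)); pattern[1:3, 1:3] = 5.0  # weak magnetic domain
frames = [saturated + pattern + rng.normal(0, 1, (4, 4)) for _ in range(64)]

result = domain_contrast(frames, saturated)  # domain visible, noise ~1/8 px
```

Averaging 64 difference frames leaves the domain signal intact while the noise standard deviation drops to about one eighth of a single frame's.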
Bes, Marie-Thérèse. "Traitement automatique d'images scintigraphiques : application aux images cardiaques." Toulouse 3, 1987. http://www.theses.fr/1987TOU30054.
Contassot-Vivier, Sylvain. "Calculs parallèles pour le traitement des images satellites." Lyon, École normale supérieure (sciences), 1998. http://www.theses.fr/1998ENSL0087.
Horé, Alain. "Traitement des images bidimensionnelles à l'aide des FPGAs." Thèse, Chicoutimi : Université du Québec à Chicoutimi, 2005. http://theses.uqac.ca.
Bes, Marie-Thérèse. "Traitement automatique d'images scintigraphiques : application aux images cardiaques." Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37602964k.
Full textBartovsky, Jan. "Hardware architectures for morphological filters with large structuring elements." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1060/document.
This thesis focuses on the implementation of fundamental morphological filters in dedicated hardware. Its main objective is to provide a programmable and efficient implementation of basic morphological operators using efficient dataflow algorithms, considering the entire application point of view. In the first part, we study existing algorithms for fundamental morphological operators and their implementation on different computational platforms. We are especially interested in algorithms using a queue memory, because their implementation provides sequential data access and minimal latency, properties that are very beneficial for dedicated hardware. We then propose another queue-based, arbitrary-oriented opening algorithm that allows for direct granulometric measures; performance benchmarks of these two algorithms are discussed as well. The second part presents the hardware implementation of the efficient algorithms by means of stream processing units. We begin with a 1-D dilation unit; then, thanks to the separability of dilation, we build up 2-D rectangular and polygonal dilation units. The processing unit for arbitrary-oriented opening and pattern spectrum is described as well. We also introduce a method of parallel computation using several copies of the processing units in parallel, thereby speeding up the computation. All proposed processing units are experimentally assessed in hardware by means of FPGA prototypes, and the performance and FPGA occupation results are discussed. In the third part, the proposed units are employed in two diverse applications, illustrating their capability of addressing performance-demanding, low-power embedded applications. The main contributions of this thesis are: 1) a new algorithm for arbitrary-oriented opening and pattern spectrum; 2) a programmable hardware implementation of fundamental morphological operators with large structuring elements and arbitrary orientation; 3) a performance increase obtained through multi-level parallelism. The results suggest that the previously unachievable real-time performance of these traditionally costly operators can be attained even for long concatenations and high-resolution images.
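The queue-based 1-D dilation and its separable 2-D extension can be sketched in software as follows (a minimal illustration of the principle, not the FPGA implementation; flat structuring elements and "valid"-only output, without border padding, are assumptions):

```python
from collections import deque

def dilate_1d(line, k):
    """Queue-based 1-D dilation (sliding-window maximum) with a flat
    structuring element of size k, in O(n): the deque holds indices
    of candidate maxima in decreasing value order."""
    out, q = [], deque()
    for i, v in enumerate(line):
        while q and line[q[-1]] <= v:
            q.pop()                 # dominated values can never win again
        q.append(i)
        if q[0] <= i - k:
            q.popleft()             # slide the window forward
        if i >= k - 1:
            out.append(line[q[0]])  # front of the queue is the window max
    return out

def dilate_rect(img, kw, kh):
    """Separable 2-D dilation by a kw x kh rectangle: a horizontal
    1-D pass over the rows, then a vertical 1-D pass over the columns."""
    rows = [dilate_1d(r, kw) for r in img]        # horizontal pass
    cols = [list(c) for c in zip(*rows)]          # transpose
    dil = [dilate_1d(c, kh) for c in cols]        # vertical pass
    return [list(r) for r in zip(*dil)]           # transpose back

print(dilate_1d([1, 3, 2, 5, 4], 3))  # -> [3, 5, 5]
```

Separability is what makes the rectangular hardware unit cheap: a kw x kh dilation costs two 1-D passes instead of a full 2-D window scan.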
Perrotton, Xavier. "Détection automatique d'objets dans les images numériques : application aux images aériennes." Paris, Télécom ParisTech, 2009. http://www.theses.fr/2009ENST0025.
This thesis was conducted as part of a CIFRE contract between EADS Innovation Works and Telecom ParisTech. The presented work aims at defining techniques for the detection and localisation of objects in digital images, focusing on methods based on AdaBoost because of its theoretical and practical effectiveness. A new descriptor robust to background and target texture variations is introduced. A first object detection framework based on this descriptor is proposed and applied. The observed results show that a vision system can be trained on adapted simulated data and yet be efficient on real images. A first improvement allows the proposed cascade to explore the space of descriptors and thus improve the modeling of the target: the cascade is initially built with one type of descriptor, and new kinds of descriptors are introduced when the current descriptor family no longer brings enough differentiating information. We then present a novel boosting-based learning approach which automatically learns a multi-view detector without using intra-class sub-categorization based on prior knowledge. An implicit hierarchical structure enables both precise modelling and efficient sharing of descriptors between views. These two complementary approaches are finally merged to obtain a complete algorithmic chain. By applying this model to different detection tasks, we verified, on the one hand, the capacity of the multi-view model to learn different appearances and poses and, on the other hand, the performance improvement brought by combining families of descriptors.
Tschumperlé, David. "PDE's based regularization of multivalued images and applications." Nice, 2002. http://www.theses.fr/2002NICE5779.
We are interested in PDE-based approaches to vector-valued image regularization and their applications to a wide class of image processing problems. A comparative study of existing methods allows us to propose a common mathematical framework, better suited to understanding the underlying diffusion geometry of the regularization processes as well as to designing corresponding numerical schemes. We thus develop a new multivalued image regularization approach that satisfies important geometric properties and can be used in a large range of regularization-related applications. We also tackle the problem of constrained regularization and propose a specific variational formalism unifying, in a common framework, the equations acting on directional features: unit vectors, rotation matrices, diffusion tensors, etc. The proposed solutions are analyzed and used successfully to solve applications of interest, such as color image regularization and interpolation, flow visualization, regularization of rigid motions estimated from video sequences, and the aided reconstruction of coherent fiber network models in the white matter of the brain using DT-MRI imaging.
Puteaux, Pauline. "Analyse et traitement des images dans le domaine chiffré." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS119.
During the last decade, the security of multimedia data, such as images, videos and 3D data, has become a major issue. With the development of the Internet, more and more images are transmitted over networks and stored in the cloud. This visual data is usually personal or may have a market value, and computer tools have therefore been developed to ensure its security. The purpose of encryption is to guarantee the visual confidentiality of images by making their content random. Moreover, during the transmission or archiving of encrypted images, it is often necessary to analyse or process them without knowing their original content or the secret key used during the encryption phase. This PhD thesis addresses this issue. Indeed, many applications exist, such as secret image sharing, data hiding in encrypted images, image indexing and retrieval in encrypted databases, recompression of crypto-compressed images, or correction of noisy encrypted images. In a first line of research, we present a new method of high-capacity data hiding in encrypted images. In most state-of-the-art approaches, the values of the least significant bits are replaced to embed a secret message. We take the opposite view of these approaches by proposing to predict the most significant bits. A significantly higher payload is thus obtained, while maintaining a high quality of the reconstructed image. We subsequently showed that all bit planes of an image can be processed recursively to achieve data hiding in the encrypted domain. In a second line of research, we explain how to exploit statistical measures (Shannon entropy and a convolutional neural network) in small pixel blocks (i.e. with few samples) to discriminate a clear pixel block from an encrypted pixel block in an image. We then use this analysis in an application to correct noisy encrypted images. Finally, the third line of research developed in this thesis concerns the recompression of crypto-compressed images. In the clear domain, JPEG images can be recompressed before transmission over low-speed networks, but the operation is much more complex in the encrypted domain. We therefore propose a method for recompressing crypto-compressed JPEG images directly in the encrypted domain, without knowledge of the secret key, using a bit shift of the reorganized coefficients.
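The entropy-based discrimination of clear versus encrypted blocks can be sketched as follows (a toy illustration with assumed data and an assumed decision threshold, not the thesis method, which also uses a convolutional neural network):

```python
import math
import random
from collections import Counter

def shannon_entropy(block):
    """Shannon entropy (in bits) of the pixel values in a block."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
# Assumed data: a smooth "clear" 4x4 block concentrates its values,
# while an "encrypted" block is close to uniformly random over 0..255.
clear_block = [10] * 12 + [11] * 4
encrypted_block = [random.randrange(256) for _ in range(16)]

threshold = 2.0  # hypothetical decision threshold, in bits
print(shannon_entropy(clear_block) < threshold)      # clear block  -> True
print(shannon_entropy(encrypted_block) > threshold)  # cipher block -> True
```

With only 16 samples the entropy estimate is noisy, which is exactly why the thesis studies how reliable such measures remain on small blocks.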
Aziz, Fatima. "Approche géométrique couleur pour le traitement des images catadioptriques." Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0080/document.
This manuscript investigates omnidirectional catadioptric color images as Riemannian manifolds. This geometric representation offers insights into the resolution of problems related to the distortions introduced by the catadioptric system, in the context of the color perception of autonomous systems. The manuscript starts with an overview of omnidirectional vision, the different systems used, and the geometric projection models. We then present the basic notions and tools of Riemannian geometry and its use in the image processing domain, which leads us to introduce some useful differential operators on Riemannian manifolds. We develop a method for constructing a hybrid metric tensor adapted to color catadioptric images; this tensor has the dual characteristic of depending on both the geometric position of the image points and their photometric coordinates. In this work, we mostly deal with the exploitation of this hybrid metric tensor in catadioptric image processing. Indeed, the Gaussian function is at the core of several filters and operators for various applications, such as noise reduction or the extraction of low-level characteristics from the Gaussian scale-space representation. We thus build a new Gaussian kernel dependent on the Riemannian metric tensor; it has the advantage of being applicable directly on the catadioptric image plane, while being spatially variable and dependent on local image information. In the final part of this thesis, we discuss some possible robotic applications of the hybrid metric tensor. We propose to define free space and distance transforms in the omni-image, and then to extract the geodesic medial axis, a relevant topological representation for autonomous navigation, which we use to define an optimal trajectory planning method.
Harp, Josselin. "Contribution à la segmentation des images : applications à l'estimation des plans texturés dans des images planes et omnidirectionnelles." Amiens, 2003. http://www.theses.fr/2003AMIE0313.
Huet-Guillemot, Florence. "Fusion d'images segmentees et interpretees. Application aux images aeriennes." Cergy-Pontoise, 1999. http://biblioweb.u-cergy.fr/theses/99CERG0057.pdf.
Turcotte, Maryse. "Méthode basée sur la texture pour l'étiquetage des images." Sherbrooke : Université de Sherbrooke, 2000.
Piedpremier, Julien. "Les grandes images." Paris 8, 2005. http://www.theses.fr/2005PA082545.
Najman, Laurent. "Morphologie mathématique, systèmes dynamiques et applications au traitement des images." Habilitation à diriger des recherches, Université de Marne la Vallée, 2006. http://tel.archives-ouvertes.fr/tel-00715406.
Nezan, Jean François. "Prototypage rapide d'applications de traitement des images sur systèmes embarqués." Habilitation à diriger des recherches, Université Rennes 1, 2009. http://tel.archives-ouvertes.fr/tel-00564516.
Verdant, Arnaud. "Architectures adaptatives de traitement des images dans le plan focal." Paris 11, 2008. http://www.theses.fr/2008PA112361.
Image sensors are an integral part of our daily lives. These devices are most commonly embedded in mobile products, which are subject to strong energy consumption constraints. Indeed, the images captured by such sensors contain many spatial and temporal redundancies when considering a video stream: much data is needlessly processed, transmitted and stored, thereby reducing the autonomy of such systems. This thesis addresses this power constraint by defining new architectural approaches to image processing within the pixel matrix, adapting the sensor's resources to the activity of the observed scene. New concepts of acquisition and processing based on motion detection have thus been studied. The processing architecture is derived from the subsequently developed algorithms, while offering solutions to guarantee the integrity of the analog data. An original modelling methodology was then implemented to validate the proposed concepts and to assess the consistency, robustness and power consumption of the processing. Finally, a demonstrator was designed to validate the silicon implementation of the architecture. The power consumption gains are estimated at a factor of 30 to 700 compared to state-of-the-art image sensors.
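The scene-activity idea can be illustrated with a simple frame-differencing sketch that flags only the pixel blocks whose content changed, the kind of temporal-redundancy test a focal-plane architecture might exploit to skip processing. Block size and threshold are arbitrary assumptions here; the thesis's actual analog circuits are of course not captured by this.

```python
import numpy as np

def active_blocks(prev, curr, block=4, thresh=10.0):
    """Return a boolean map of blocks whose mean absolute
    inter-frame difference exceeds `thresh` (illustrative)."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    H, W = diff.shape
    d = diff[:H - H % block, :W - W % block]
    d = d.reshape(H // block, block, W // block, block).mean(axis=(1, 3))
    return d > thresh

prev = np.zeros((16, 16))
curr = np.zeros((16, 16))
curr[0:4, 0:4] = 255          # motion confined to one block
mask = active_blocks(prev, curr)
```

Only the single changed block is flagged, so a downstream stage would read out and process 1/16 of the array for this frame.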
Lozano, Vincent. "Contribution de l'analyse d'image couleur au traitement des images textile." Saint-Etienne, 1998. http://www.theses.fr/1998STET4003.
Ropert, Michaël. "Filtres médians et médians généralisés : application au traitement des images." Rennes 1, 1995. http://www.theses.fr/1995REN10163.
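As background on this entry's topic, a weighted median generalizes the classical median filter by repeating each sample in the window according to an integer weight before taking the median. The toy 1-D version below (not code from the thesis) shows how the filter removes an impulse while keeping the signal levels intact.

```python
import numpy as np

def weighted_median_filter(signal, weights):
    """1-D weighted median: each sample in the sliding window is
    repeated `weights[k]` times before taking the median.
    Uniform weights reduce to the classical median filter."""
    k = len(weights)
    pad = k // 2
    padded = np.pad(signal, pad, mode='edge')
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        window = padded[i:i + k]
        expanded = np.repeat(window, weights)  # apply integer weights
        out[i] = np.median(expanded)
    return out

# An isolated impulse (99) is removed; the two flat levels survive.
noisy = np.array([0, 0, 99, 0, 10, 10, 10], dtype=float)
clean = weighted_median_filter(noisy, [1, 1, 1])
```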
Full textGRATIN, CHRISTOPHE. "De la representation des images au traitement morpholmogique d'images tridimensionnelles." Paris, ENMP, 1993. http://www.theses.fr/1993ENMP0383.
Full textGoudail, François. "Localisation d'objets dans des images fortement bruitées : une approche probabiliste paramétrique." Aix-Marseille 3, 1997. http://www.theses.fr/1997AIX30013.
Full textCHAMBON, ERIC. "Traitement de larges pertes de substances a la face dorsale de la main par lambeaux en ilot preleves a l'avant-bras." Lille 2, 1990. http://www.theses.fr/1990LIL2M186.
Full textChastel, Serge. "Contribution de la théorie des hypergraphes au traitement des images numériques." Saint-Etienne, 2001. http://www.theses.fr/2001STET4008.
Recent developments in image processing have shown the promise of discrete formalizations of the digital picture and of their study from the point of view of combinatorics. In this document we focus on modeling the digital picture by means of hypergraphs... [etc.]
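One common hypergraph model of a digital picture, the image neighbourhood hypergraph studied for instance by Bretto and colleagues, attaches to each pixel a hyperedge of spectrally similar neighbours. The sketch below is a minimal illustration of that kind of construction, not code from the thesis; the 4-neighbourhood and the grey-level tolerance `alpha` are assumptions.

```python
def neighborhood_hypergraph(image, alpha=10):
    """Image neighbourhood hypergraph (sketch): one hyperedge per
    pixel, containing the pixel and its 4-neighbours whose grey
    level differs from it by at most `alpha`."""
    H = len(image)
    W = len(image[0])
    edges = {}
    for y in range(H):
        for x in range(W):
            e = {(y, x)}
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W:
                    if abs(image[ny][nx] - image[y][x]) <= alpha:
                        e.add((ny, nx))
            edges[(y, x)] = e
    return edges

# Two homogeneous zones: hyperedges never straddle the contrast edge.
img = [[10, 12, 200],
       [11, 13, 205]]
hg = neighborhood_hypergraph(img)
```

Properties of the resulting hypergraph (helly property, isolated vertices, etc.) can then be studied combinatorially, which is the general spirit of such formalizations.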
Gousseau, Yann. "Distribution de formes dans les images naturelles." Paris 9, 2000. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2000PA090081.
Full textThiebaut, Carole. "Caractérisation multidimensionnelle des images astronomiques : application aux images TAROT." Toulouse, INPT, 2003. http://www.theses.fr/2003INPT022H.
Full textKouassi, Rémi Kessé. "Etude des transformations linéaires pour la représentation des images multicomposantes : application a la quantification et à la segmentation des images couleur." Dijon, 2000. http://www.theses.fr/2000DIJOS054.
Full textPérez, Patrick. "Modèles et algorithmes pour l'analyse probabiliste des images." [S.l.] : [s.n.], 2003. http://www.irisa.fr/centredoc/publis/HDR/2003/irisapublication.2005-08-03.7343433931.
Li-Thiao-Té, Sébastien. "Traitement du signal et images LC/MS pour la recherche de biomarqueurs." PhD thesis, Cachan, Ecole normale supérieure, 2009. https://theses.hal.science/tel-00466961/fr/.
Liquid chromatography mass spectrometry (LC/MS) is a promising technique in analytical chemistry for the discovery of protein biomarkers. This thesis deals with correcting acquisition distortions through LC/MS image alignment and intensity standardization. We then apply a contrario detection and study the detection limit of the proposed algorithm.
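For context, an a contrario detector declares an event meaningful when its expected number of occurrences under a background noise model, the number of false alarms (NFA), falls below a threshold. The sketch below uses a binomial background model with made-up numbers; it illustrates the principle only, not the detector developed in the thesis.

```python
from math import comb

def binomial_tail(n, k, p):
    """P[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k, n + 1))

def nfa(n_tests, n, k, p):
    """Number of false alarms of an event in which k of n
    observations agree, under a background model of probability p.
    The event is 'epsilon-meaningful' when NFA < epsilon."""
    return n_tests * binomial_tail(n, k, p)

# 9 of 10 samples above threshold when chance level is 0.1:
# extremely unlikely under the noise model, hence detected
# even after correcting for 1000 tested candidates.
score = nfa(n_tests=1000, n=10, k=9, p=0.1)
detected = score < 1.0
```

The multiplication by the number of tests is what makes the threshold on the NFA directly interpretable as an expected count of false detections.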
Moghrani, Madjid. "Segmentation coopérative et adaptative d’images multicomposantes : application aux images CASI." Rennes 1, 2007. http://www.theses.fr/2007REN1S156.
This thesis focuses on cooperative approaches to image segmentation. Two adaptive systems were implemented: the first is parallel, the second sequential. The parallel system is based on competing segmentation-by-classification methods; the sequential system runs these methods according to a predefined schedule. Feature extraction for segmentation is performed according to the nature of the regions (uniform or textured). Both systems are composed of three main modules. The first module detects the nature of the image regions (uniform or textured) in order to adapt subsequent processing. The second module is dedicated to the segmentation of the detected regions according to their nature. The segmentation results are assessed and validated at different levels of the segmentation process. The third module merges the intermediate results obtained on the two types of regions. Both systems are tested and compared on synthetic and real mono- and multi-component images from aerial remote sensing.
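The first module's uniform-versus-textured decision can be illustrated by a naive block-variance test: flat blocks have low grey-level variance, textured ones high. The block size and threshold below are arbitrary assumptions, and the system described in the thesis uses richer criteria.

```python
import numpy as np

def classify_blocks(image, block=8, var_thresh=25.0):
    """Label each block 'uniform' or 'textured' from its grey-level
    variance, so that later segmentation can adapt its features.
    Threshold and block size are illustrative."""
    H, W = image.shape
    labels = {}
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            v = image[y:y + block, x:x + block].var()
            labels[(y, x)] = 'textured' if v > var_thresh else 'uniform'
    return labels

# Left half flat, right half noisy: the two blocks get different labels.
rng = np.random.default_rng(0)
img = np.zeros((8, 16))
img[:, 8:] = rng.normal(128, 40, size=(8, 8))
labels = classify_blocks(img)
```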
Dong, Lixin. "Extraction automatique des contours cardiaques sur des images échocardiographiques." Paris 12, 1990. http://www.theses.fr/1990PA120040.
De Hauwer, Christophe. "Evaluation du comportement de cellules in vitro par traitement des images." Doctoral thesis, Universite Libre de Bruxelles, 1998. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212010.
Full textBrun-Lecouteulx, Annie. "Essai d'application de l'estimateur de Stein au traitement des images scintigraphiques." Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb37612234x.
Full textTupin, Florence. "Champs de Markov sur graphes pour le traitement des images radar /." Paris : École nationale supérieure des télécommunications, 2007. http://catalogue.bnf.fr/ark:/12148/cb41098170v.
The title page and cover additionally bear: "Département Traitement du signal et des images. Groupe Traitement et interprétation des images". Bibliography pp. 103-117.
Jaouen, Vincent. "Traitement des images multicomposantes par EDP : application à l'imagerie TEP dynamique." Thesis, Tours, 2016. http://www.theses.fr/2016TOUR3303/document.
This thesis presents several methodological contributions to the processing of vector-valued images, with dynamic positron emission tomography (dPET) imaging as the target application. dPET is a functional imaging modality that produces highly degraded images composed of successive temporal acquisitions. Vector-valued images often present some redundancy or complementarity of information across channels, which can be exploited to enhance processing results. Our first contribution exploits these properties to perform robust segmentation of target volumes with deformable models: we propose a new external force field that guides deformable models toward the vector edges of regions of interest. Our second contribution deals with the restoration of such images to further facilitate their analysis: we propose a new partial differential equation-based approach that enhances the signal-to-noise ratio of degraded images while sharpening their edges. Applied to dPET imaging, we show to what extent our methodological contributions can help solve an open problem in neuroscience: the noninvasive quantification of neuroinflammation.
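A classical way to obtain a vector (multi-channel) edge map, often used as the basis of external forces for deformable models, is Di Zenzo's structure tensor: its largest eigenvalue per pixel measures multi-channel edge strength. The sketch below is generic background of that kind, not the force field proposed in the thesis.

```python
import numpy as np

def vector_edge_map(image):
    """Di Zenzo vector edge strength of a multi-channel image
    (H, W, C): square root of the largest eigenvalue of the 2x2
    structure tensor obtained by summing channel gradients."""
    gy, gx = np.gradient(image.astype(float), axis=(0, 1))
    E = (gx * gx).sum(axis=2)
    F = (gx * gy).sum(axis=2)
    G = (gy * gy).sum(axis=2)
    tr, det = E + G, E * G - F * F
    # Largest eigenvalue of [[E, F], [F, G]] per pixel
    lam = 0.5 * (tr + np.sqrt(np.maximum(tr * tr - 4 * det, 0.0)))
    return np.sqrt(lam)

# A vertical step present in all channels gives a strong response
# at the transition and none in the flat regions.
img = np.zeros((8, 8, 3))
img[:, 4:, :] = 1.0
edges = vector_edge_map(img)
```

Because the channel gradients are combined in one tensor, an edge supported by several channels responds more strongly than in any single channel taken alone, which is precisely why vector edge maps help when channels are individually noisy.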