Dissertations / Theses on the topic 'Génération d'images'
Bonnard, Jennifer. "Génération d'images 3D HDR." Thesis, Reims, 2015. http://www.theses.fr/2015REIMS014/document.
HDR imaging and 3D imaging are two fields that have developed simultaneously but separately in recent years. On the one hand, HDR (High Dynamic Range) imaging extends the dynamic range of traditional images, called LDR (Low Dynamic Range). On the other hand, 3D imaging offers immersion in the displayed film, with the feeling of being part of the acquired scene. Recently, these two fields have been combined to provide 3D HDR images or videos, but few viable solutions exist and none of them is available to the public. In this thesis, we propose a method to generate 3D HDR images for autostereoscopic displays by adapting a multi-viewpoint camera to multi-exposure acquisition. To do so, neutral density filters are fixed on the objectives of the camera. Then, pixel matching is applied to aggregate pixels that represent the same point in the acquired scene. Finally, radiance is calculated for each pixel of the set of images using a weighted average of LDR values. An additional step is necessary because some pixels have a wrong radiance. We propose a method based on the color of adjacent pixels and two methods based on correcting the disparity of those pixels: the first relies on the disparity of pixels in the neighborhood, and the second on the disparity computed independently on each color channel. This pipeline allows the generation of a 3D HDR image on each viewpoint. A tone-mapping algorithm is then applied to each of these images. Composing them with the filters corresponding to the autostereoscopic screen used allows the visualization of the generated 3D HDR image.
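To make the weighted-average radiance estimation described in the abstract concrete, here is a minimal NumPy sketch of a generic multi-exposure HDR merge (an illustration of the general technique only, not the thesis's pipeline; the hat weighting and all values are invented for the example):

```python
import numpy as np

def merge_hdr(ldr_stack, exposures):
    """Estimate per-pixel radiance from aligned LDR images (generic sketch).

    ldr_stack : (N, H, W) array of pixel values in [0, 255]
    exposures : (N,) effective exposures (here: ND-filter attenuations)
    """
    z = np.asarray(ldr_stack, dtype=np.float64)
    # Hat weighting: trust mid-range pixels, distrust under/over-exposed ones.
    w = 1.0 - np.abs(z / 127.5 - 1.0)
    # Each image votes for radiance = value / exposure, weighted by reliability.
    radiance = (w * (z / np.asarray(exposures)[:, None, None])).sum(axis=0)
    radiance /= w.sum(axis=0) + 1e-8
    return radiance

rng = np.random.default_rng(0)
scene = rng.uniform(1, 1000, size=(4, 4))           # "true" radiance
exps = np.array([1.0, 0.5, 0.25, 0.125])            # ND-filter attenuations
stack = np.clip(scene[None] * exps[:, None, None], 0, 255)
print(merge_hdr(stack, exps).round(1))              # close to `scene`
```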
Chevrier, Christine. "Génération de séquences composées d'images de synthèse et d'images vidéo." Nancy 1, 1996. http://www.theses.fr/1996NAN10121.
The visual impact assessment of architectural projects in urban environments is usually based on manual drawings, paintings on photographs, scale models, or computer-generated images. These techniques are either too expensive or not realistic enough. Strictly using computer images requires an accurate 3D model of the environment; computing such a model takes a long time and the results lack visual accuracy. Our technique of overlaying computer-generated images onto photographs of the environment is considerably more effective and reliable. This method is a promising solution with regard to computation time (no accurate 3D model of the environment) as well as visual realism (provided by the photograph itself). Such a solution nevertheless requires solving many problems in order to obtain geometrical and photometrical coherence in the resulting image. To this end, image analysis and image synthesis methods have to be designed and developed. The method is generalized to produce an animated film, which can further increase the realism of the simulation. My Ph.D. work was to test and integrate various image analysis and synthesis techniques for compositing computer-generated images with photographs or video footage. This report explains the steps required for realistic compositing. For each of these steps, various techniques were tested in order to choose the most suitable solutions according to the state of the art and to the applications we were dealing with (architectural and urban projects). The application of this work was the simulation of the Paris bridges illumination projects. In parallel with this work, I present a new method for interpolating computer images to generate a sequence of images; it requires no approximation of the camera motion. Since video images are interlaced, sequences of computer images need to be interlaced too, and the proposed interpolation technique is capable of doing this.
Smadja, Laurent. "Génération d'environnements 3D denses à partir d'images panoramiques cylindriques." Paris 6, 2003. http://www.theses.fr/2003PA066488.
Chailloux, Cyril. "Recalage d'images sonar par appariemment de régions : application à la génération d'une mosaïque." Télécom Bretagne, 2007. http://www.theses.fr/2007TELB0044.
Underwater positioning is a recurrent issue that both side-scan sonar imaging and AUV navigation face. Sonar image matching is an essential task to compensate for navigation errors; nevertheless, characteristics inherent to sonar images, such as brightness variations, shadows, or occlusions, make this task difficult. This work is consequently devoted to defining a robust block-matching algorithm. The relationship between image intensities can be characterized as stationary, linear, functional, or statistical. We study the most appropriate relation and define the associated similarity measure. Statistical criteria perform better than the others, especially mutual information and the correlation ratio. Within a multi-resolution process, coupling mutual information and the correlation ratio yields a robust similarity measure. This measure is applied in a block-matching approach, where each block is centered on a saliency maximum. Parzen windowing is used to model the data. Regularization consists in filtering the vector fields with a coherence constraint. Real data sets are used to validate the algorithms and the similarity measure. These algorithms allow the construction of a georeferenced mosaic.
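The two statistical similarity criteria named in the abstract, mutual information and the correlation ratio, can be sketched for two image blocks as follows (a generic illustration, not the thesis's implementation; bin counts and the toy data are arbitrary):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two same-size image blocks, from a joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def correlation_ratio(a, b, bins=32):
    """eta^2: fraction of b's variance explained by binned intensities of a."""
    labels = np.digitize(a.ravel(), np.histogram_bin_edges(a.ravel(), bins))
    b = b.ravel()
    within = 0.0
    for lab in np.unique(labels):           # within-class variance, weighted
        grp = b[labels == lab]
        within += grp.size * grp.var()
    return 1.0 - within / (b.size * b.var())

rng = np.random.default_rng(1)
a = rng.normal(size=(16, 16))
b = 2 * a + 0.1 * rng.normal(size=(16, 16))  # functionally related block
print(mutual_information(a, b), correlation_ratio(a, b))
```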
Moreau, Patrick. "Modélisation et génération de dégradés dans le plan discret." Bordeaux 1, 1995. http://www.theses.fr/1995BOR10640.
Osselin, Jean-François. "Génération automatique d'armures de grand rapport. Algorithme génétique." Mulhouse, 2001. http://www.theses.fr/2001MULH0668.
Full textNshare, Abdallah. "Définition et conception d'une nouvelle génération de rétines programmables." Paris 11, 2002. http://www.theses.fr/2002PA112155.
In this manuscript, we present a new architectural approach for image processing. The analysis of vision chips, in particular artificial retinas, shows a poor distribution of the low-level image processing operators. In spite of some twenty years of research in this domain, this aspect has led to circuits of poor resolution integrating operators with low flexibility and very poor programmability. It seemed indispensable to us to find a new compromise between versatility and parallelism. We therefore propose a new artificial retina architecture intended to improve the balance between computing speed and the flexibility of vision chips. Our approach consists in moving the set of functions that are generally integrated near the sensor outside of the sensor array: the operators are henceforth shared by a set of pixels, and processing is performed sequentially. This architecture is implemented by an array of sensors associated with a column of processors operating in an analogue-digital mixed mode, for which we designed an original mixed processor. It becomes possible to perform sequences of calculations implementing in situ a wide class of image processing algorithms. To validate this approach, we realized an artificial retina in a 0.6 µm CMOS technology. This circuit comprises an array of 16x16 pixels associated with a column of 16 processors. This first circuit allowed us to validate the architecture and the analogue cells. Processing tasks such as motion detection or edge detection were programmed, and the processing speed obtained makes real-time applications feasible for high-resolution retinas (256x256). To widen the range of practicable algorithms, we modified the basic architecture so as to increase the precision of the analogue calculations and the processing speed, and to reduce the pixel area. We designed a second circuit that integrates these modifications.
Brangoulo, Sébastien. "Codage d'images fixes et de vidéos par ondelette de seconde génération : théorie et applications." Rennes 1, 2005. http://www.theses.fr/2005REN1S003.
Petit, Josselin. "Génération, visualisation et évaluation d'images HDR : application à la simulation de conduite nocturne." Phd thesis, Université Claude Bernard - Lyon I, 2010. http://tel.archives-ouvertes.fr/tel-00707723.
Combaz, Jean. "Utilisation de phénomènes de croissance pour la génération de formes en synthèse d'images." Phd thesis, Université Joseph Fourier (Grenoble), 2004. http://tel.archives-ouvertes.fr/tel-00528689.
Barroso, Nicolas. "Génération d'images intermédiaires pour la création d'animations 2D stylisées à base de marques." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES083.
As part of my thesis, I am interested in the creation of traditional 2D animations, where all the images are handcrafted. Specifically, I explore how computers can assist artists in producing animations efficiently without constraining the artistic creative process. To address this problem, my work falls within the scope of automatic methods in which the animator collaborates iteratively with the computer. I propose a method that takes two keyframe images and a series of 2D vector fields describing the motion, in image space, of the animation, and generates the intermediate images while preserving the style given as an example. My method combines two manual animation techniques, pose-to-pose and frame-by-frame animation, providing strong control by allowing any generated image to be edited in the same way as the example images provided. My research covers several domains: motion analysis, 2D curve control, mark-based rendering, and paint simulation.
Chen, Yong. "Analyse et interprétation d'images à l'usage des personnes non-voyantes : application à la génération automatique d'images en relief à partir d'équipements banalisés." Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080046/document.
Visual information is a very rich source of information to which blind and visually impaired (BVI) people do not always have access. The presence of images is a real handicap for the BVI, and transcription into an embossed image may increase an image's accessibility to them. Our work takes into account the aspects of tactile cognition and the rules and recommendations for the design of an embossed image. We focused on the analysis and comparison of digital image processing techniques in order to find methods suitable for an automatic procedure for embossing images. At the end of this research, we tested the embossed images created by our system with blind users. Two important points were evaluated in the tests: the degree of understanding of an embossed image, and the time required for its exploration. The results suggest that the images made by this system are accessible to blind users who know braille. The implemented system can be regarded as an effective tool for creating an embossed image: it offers an opportunity to generalize and formalize the creation procedure and provides a very quick and easy solution. The system can process pedagogical images with simplified semantic content and can be used as a practical tool for making digital images accessible. It also offers the possibility of cooperating with other modalities of image presentation for blind people, for example a traditional interactive map.
François, Michaël. "Génération de nombres pseudo-aléatoires basée sur des systèmes multi-physiques exotiques et chiffrement d'images." Troyes, 2012. http://www.theses.fr/2012TROY0023.
The use of (pseudo-)random numbers has taken on an important dimension in recent decades. Many applications in telecommunications, cryptography, numerical simulation, and gambling have contributed to the development and use of these numbers. The methods used to generate (pseudo-)random numbers rely on two types of processes: physical and algorithmic. In this PhD thesis, two classes of generators, based respectively on physical measurements and on mathematical processes, are presented, with two generators for each class. The first class exploits the response of a physical system as a source for the generation of random sequences, using both simulation results and interferometric measurements to produce sequences of random numbers. The second class is based on two types of chaotic functions and uses the outputs of these functions as permutation indices on an initial vector. This thesis also addresses encryption systems for data protection. Two encryption algorithms using chaotic functions are proposed; they apply a permutation-substitution process to the bits of the original image. A thorough analysis based on statistical tests confirms the relevance of the cryptosystems developed in this thesis.
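The second class of generators, which uses chaotic-function outputs as permutation indices on an initial vector, can be illustrated with a logistic map (a minimal sketch of the principle only; the thesis's actual chaotic functions and parameters are not reproduced here):

```python
import numpy as np

def chaotic_permutation(n, x0=0.3141592, mu=3.9999, burn_in=1000):
    """Derive a permutation of range(n) from a logistic-map orbit.

    The orbit values are ranked, and the ranking is used as permutation
    indices applied to an initial vector (generic principle only).
    """
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = mu * x * (1.0 - x)
    orbit = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        orbit[i] = x
    return np.argsort(orbit)          # rank order = permutation

v0 = np.arange(16, dtype=np.uint8)    # initial vector
perm = chaotic_permutation(v0.size)
print(v0[perm])                       # pseudo-random output sequence
```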
Dischler, Jean-Michel. "La génération de textures 3D et de textures a microstructure complexe pour la synthese d'images." Université Louis Pasteur (Strasbourg) (1971-2008), 1996. http://www.theses.fr/1996STR13015.
Lathuilière, Alexandra. "Génération de mires colorées pour la reconstruction 3D couleur par système stéréoscopique de vision active." Dijon, 2007. http://www.theses.fr/2007DIJOS033.
The main purpose of this project is the three-dimensional reconstruction of colored objects. To this end, the principle of a 3D scanner is used: active stereovision. The three-dimensional coordinates are computed from two points of view, a camera and an LCD projector, by triangulation between the two sets of information. The interest of this work is to bring intelligence into the scanning process. The project is directed towards color-coded structured light, so as to determine three-dimensional information and color at the same time, without loss of information and while limiting the errors caused by the disappearance of certain points of interest in regions of pronounced relief on the studied object. Geometric and color calibration are an important stage. The system presented evolves towards the concept of a multispectral three-dimensional scanner, to improve the mapping and the acquisition of the color of the studied object.
Bidal, Samuel. "Reconstruction tridimensionnelle d'éléments anatomiques et génération automatique de maillages éléments finis optimisés." Aix-Marseille 2, 2003. http://www.theses.fr/2003AIX20673.
The aim of this work is to quickly generate good-quality models of the human body. We created a method package that generates finite element meshes from pictures of serial slices (taken from anatomical slices, X-ray scanner, or MRI). Mesh generation is divided into three main steps: contour detection, 3D reconstruction, and meshing. The contour detection methods were chosen to be applicable to a wide range of pictures. The 3D reconstruction and meshing methods are new and based on an octahedral lattice; they generate quadrangular or hexahedral elements. The organs of the head were chosen to validate the package. We studied other organs too, but those works are given here only as examples.
Morlans, Richard. "Génération des trajectoires d'un robot d'exploration planétaire utilisant un modèle numérique de terrain issu d'images prises par satellite." Montpellier 2, 1992. http://www.theses.fr/1992MON20119.
Zhao, Jiaxin. "Génération de maillage à partir d'images 3D en utilisant l'adaptation de maillage anisotrope et une équation de réinitialisation." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEM004/document.
Imaging techniques have improved considerably in the last decades. They can accurately provide numerical descriptions from 2D or 3D images, opening perspectives on inner information not otherwise visible, with applications in different fields such as medical studies, materials science, or urban environments. In this work, a technique to build a numerical description in mesh format has been implemented and used in numerical simulations coupled with finite element solvers. First, mathematical morphology techniques are introduced to handle the image information, providing the specific features of interest for the simulation. The immersed image method is then proposed to interpolate the image information on a mesh. Next, an iterative anisotropic mesh adaptation operator is developed to construct the optimal mesh, based on the estimated error of the image interpolation; the mesh is thus constructed directly from the image information. We also propose a new methodology to build a regularized phase function, corresponding to the objects we wish to distinguish in the image, using a redistancing method. Such a function has two main advantages: its gradient behaves better for mesh adaptation, and it may be used directly in the finite element solver. Stabilized finite element flow and advection solvers are coupled to the constructed anisotropic mesh and the redistancing function, allowing application to multiphase flow numerical simulations. All these developments have been extended to a massively parallel context. An important objective of this work is the simplification of image-based computations, through a modified way of segmenting the image and by coupling everything to an automatic construction of the mesh used in the finite element simulations.
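The regularized phase function built by redistancing can be sketched as follows: a signed distance to the object boundary, computed with Euclidean distance transforms, is passed through a smooth profile (a generic illustration; the half-width parameter and the tanh profile are assumptions, not the thesis's exact formulation):

```python
import numpy as np
from scipy import ndimage

def regularized_phase(binary_mask, epsilon=2.0):
    """Smooth phase function in (-1, 1) from a segmented binary image.

    A signed distance is built with two Euclidean distance transforms,
    then regularized with a tanh profile of half-width `epsilon` (pixels),
    so its gradient stays bounded near the interface.
    """
    mask = binary_mask.astype(bool)
    d_out = ndimage.distance_transform_edt(~mask)  # distance outside object
    d_in = ndimage.distance_transform_edt(mask)    # distance inside object
    signed = d_in - d_out                          # >0 inside, <0 outside
    return np.tanh(signed / epsilon)

img = np.zeros((64, 64)); img[20:44, 20:44] = 1    # toy "segmented object"
phi = regularized_phase(img)
print(phi.min(), phi.max())                        # smooth near the interface
```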
Youssef, Stéphanie. "Aide au concepteur pour la génération de masques analogiques, réutilisables et optimisés, en technologie CMOS nanométrique." Paris 6, 2012. http://www.theses.fr/2012PA066645.
Electronics and semiconductors are evolving at an ever-increasing rate, and new technologies are being introduced to extend CMOS into nano/molecular-scale MOSFET structures. Tighter time-to-market constraints press the need for an automated, reliable analog design flow. Automatic layout generation is a key ingredient of such a flow, and its design challenges are drastically exacerbated when more complex circuits and newer technologies must be hosted. This thesis presents a designer-assisted, reusable, and optimized analog layout generation flow that addresses the challenges facing the automation of analog circuits. It is part of the CHAMS project developed at LIP6 and was developed in three phases. First, we designed a library of analog Smart Devices that are parameterized, reusable, and available in different layout styles; a generic language is used to describe these devices, to ease technology migration and the calculation of layout-induced parameters. Second, we developed the tools to generate the layout of complex circuits using the Smart Device library, the technology files, and the designer's geometrical placement constraints needed to guarantee a given performance; an intelligent topological representation is used to efficiently place the circuit modules given the designer's set of constraints. Third, we created algorithms to optimize the layouts for different aspect ratios, minimizing the area and the routing parasitics; in parallel, the algorithm directly calculates and back-annotates the layout-dependent parasitic parameters. This work provides a reliable and efficient solution for fast, optimized, parasitic-aware layout generation of complex analog circuits.
Bandon, David. "Maria : une optimisation adaptative d'un archivage d'images médicales par transfert anticipé." Compiègne, 1996. http://www.theses.fr/1996COMPD961.
Peschoud, Cécile. "Etude de la complémentarité et de la fusion des images qui seront fournies par les futurs capteurs satellitaires OLCI/Sentinel 3 et FCI/Meteosat Troisième Génération." Thesis, Toulon, 2016. http://www.theses.fr/2016TOUL0012/document.
The objective of this thesis was to propose, validate, and compare methods for fusing images provided by a low-Earth-orbit multispectral sensor and a geostationary multispectral sensor, in order to obtain water composition maps with both spatial detail and high temporal resolution. Our methodology was applied to the OLCI low-Earth-orbit sensor on Sentinel-3 and the FCI geostationary sensor on Meteosat Third Generation. First, the sensitivity of each sensor to water color was analyzed. As images from neither sensor were yet available, they were simulated over the Gulf of Lion using hydrosol maps (chlorophyll, SPM, and CDOM) and radiative transfer models (Hydrolight and MODTRAN). Two fusion methods were then adapted and tested on the simulated images: the SSTF (Spatial, Spectral, Temporal Fusion) method, inspired by the method of Vanhellemont et al. (2014), and the STARFM (Spatial and Temporal Adaptive Reflectance Fusion Model) method of Gao et al. (2006). The fusion results were validated against the simulated reference images and by estimating hydrosol maps from the fused images and comparing them with the input maps of the simulation process. To improve the FCI SNR, temporal filtering was proposed. Finally, as the aim is to obtain a water quality indicator, the fusion methods were adapted and tested on the hydrosol maps estimated from the simulated FCI and OLCI images.
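The core idea of STARFM-like fusion, injecting the high-resolution spatial detail observed at a common acquisition date into the high-frequency geostationary time series, can be reduced to a single-pair sketch (real STARFM weights neighboring pixels spectrally, spatially, and temporally; everything here is an idealized toy case):

```python
import numpy as np

def inject_spatial_detail(geo_t, geo_t0, leo_t0):
    """Very simplified fusion in the spirit of STARFM-like methods.

    geo_t  : low-resolution geostationary image at the target time
    geo_t0 : low-resolution image at the time the high-resolution
             (low-orbit) image leo_t0 was also acquired
    All arrays are assumed co-registered and resampled to the
    high-resolution grid.
    """
    return geo_t + (leo_t0 - geo_t0)   # add the high-frequency spatial detail

rng = np.random.default_rng(2)
truth_t0 = rng.uniform(0, 1, (32, 32))
truth_t = truth_t0 + 0.05                      # uniform temporal change
blur = lambda x: x.reshape(8, 4, 8, 4).mean(axis=(1, 3)).repeat(4, 0).repeat(4, 1)
fused = inject_spatial_detail(blur(truth_t), blur(truth_t0), truth_t0)
print(np.abs(fused - truth_t).max())           # ~0 for this idealized case
```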
Lapray, Pierre-Jean. "Nouvelle génération de systèmes de vision temps réel à grande dynamique." Phd thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00947744.
Mosset, Alexis. "Étude expérimentale des fluctuations d'origine quantique en amplification paramétrique d'images." Phd thesis, Université de Franche-Comté, 2004. http://tel.archives-ouvertes.fr/tel-00009322.
Full textBelloulata, Kamel. "Compression d'images par fractale : études sur la mesure et le domaine de recherche de l'autosimilarité (spatial ou transforme) et sur l'accélération de la génération du modèle fractal." Lyon, INSA, 1998. http://www.theses.fr/1998ISAL0013.
The goal of this work is to develop an optimized method for image compression based on fractal theory. Fractal coding is based on the detection, measurement, and encoding of the self-similarity present in natural images. It is well known that the quality of the decoded images depends directly on how well the self-similarity between objects of the original image is measured. We present two coding schemes based on the L∞ and Lα metrics, and compare their performance on various standard images in terms of PPSNR and maximum local distortion. Concerning hybrid schemes, we propose a new image compression scheme based on fractal coding of the coefficients of a wavelet transform, in order to exploit the advantages of both theories (wavelets and fractals) and the self-similarity observed in each subband. Concerning the acceleration of fractal coding, a new approach using a hybrid fractal/VQ/subband coding scheme is developed and evaluated. This approach uses a fast non-iterative block clustering algorithm to code the wavelet transform coefficients; it is compared with other algorithms in terms of the rate improvement obtained.
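The heart of fractal coding, finding for each range block a contractive affine mapping of some domain block, can be sketched as follows (a generic illustration with an exhaustive search over a handful of domains; block sizes and the L2 collage error are choices made for the example, not the thesis's L∞/Lα schemes):

```python
import numpy as np

def encode_range_block(r, domains):
    """Find the best contractive mapping  r ~ s*d + o  over candidate domains."""
    best = None
    for idx, d in enumerate(domains):
        d0, r0 = d - d.mean(), r - r.mean()
        denom = (d0 * d0).sum()
        s = (d0 * r0).sum() / denom if denom > 0 else 0.0
        s = np.clip(s, -1.0, 1.0)                 # enforce contractivity
        o = r.mean() - s * d.mean()
        err = ((s * d + o - r) ** 2).sum()        # L2 collage error
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best

def downsample2(x):                               # 2x2 block averaging
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(3)
img = rng.uniform(0, 255, (16, 16))
domains = [downsample2(img[i:i+8, j:j+8])         # 8x8 domains -> 4x4
           for i in range(0, 9, 4) for j in range(0, 9, 4)]
err, idx, s, o = encode_range_block(img[0:4, 0:4], domains)
print(idx, round(s, 3), round(o, 1), round(err, 1))
```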
Kieu, Van Cuong. "Modèle de dégradation d’images de documents anciens pour la génération de données semi-synthétiques." Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS029/document.
In the last two decades, the increase in document image digitization projects has spurred intense research on document image processing and analysis algorithms (handwriting recognition, document structure analysis, spotting and indexing/retrieval of graphical elements, etc.). A number of successful algorithms are based on learning (supervised, semi-supervised, or unsupervised). In order to train such algorithms and to compare their performance, the document image analysis community needs many publicly available annotated document image databases, whose contents must be exhaustive enough to be representative of the possible variations in the documents to process or analyze. Creating real document image databases requires an automatic or manual annotation process. The performance of learning-based algorithms is proportional to the quality and completeness of these databases, yet annotation remains largely manual, and the manual process is complicated, subjective, and tedious. To overcome these difficulties, several crowd-sourcing initiatives have been proposed, some of them modelled as games to be more attractive. Such processes significantly reduce the cost and subjectivity of annotation, but difficulties remain: transcription and text-line alignment, for example, still have to be carried out manually. Since the 1990s, alternative approaches have been proposed, including the generation of semi-synthetic document images mimicking real ones. Semi-synthetic document image generation allows benchmarking databases to be created rapidly and cheaply for evaluating performance and for training document processing and analysis algorithms. In the context of the DIGIDOC project (Document Image diGitisation with Interactive DescriptiOn Capability) funded by the ANR (Agence Nationale de la Recherche), we focus on semi-synthetic document image generation adapted to ancient documents. First, we investigate new degradation models, or adapt existing ones, for ancient documents: a bleed-through model, a distortion model, a character degradation model, etc. Second, we apply these degradation models to generate semi-synthetic document image databases for performance evaluation (e.g., the ICDAR 2013 and GREC 2013 competitions) and for performance improvement (by re-training a handwriting recognition system, a segmentation system, and a binarisation system). This research work opens many collaboration opportunities to share our experimental results with the scientific community; this collaborative work also helps us validate our degradation models and demonstrate the efficiency of semi-synthetic document images for performance evaluation and re-training.
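As an illustration of the kind of character degradation model mentioned in the abstract, here is a Kanungo-style sketch in which pixels flip with a probability that decays with the distance to the ink/background frontier (the thesis's models, such as bleed-through and geometric distortion, are richer than this):

```python
import numpy as np
from scipy import ndimage

def degrade_characters(binary_doc, alpha=1.0, rng=None):
    """Kanungo-style ink degradation sketch: flip pixels near strokes
    with a probability decaying exponentially with the distance to the
    ink/background frontier.
    """
    rng = rng or np.random.default_rng()
    ink = binary_doc.astype(bool)
    d_bg = ndimage.distance_transform_edt(~ink)   # background-to-ink distance
    d_ink = ndimage.distance_transform_edt(ink)   # ink-to-background distance
    dist = np.where(ink, d_ink, d_bg)
    p_flip = np.exp(-alpha * dist)                # high near the frontier
    flips = rng.random(binary_doc.shape) < p_flip
    return np.where(flips, ~ink, ink).astype(np.uint8)

doc = np.zeros((32, 32), np.uint8); doc[8:24, 14:18] = 1   # a toy stroke
print(degrade_characters(doc, rng=np.random.default_rng(4)).sum())
```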
Callahan, Michael. "Analyse de la cinétique de transformation et des instabilités de déformation dans des aciers TRIP "Moyen Manganèse" de 3ème génération." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC065/document.
This thesis studies the mechanical behavior of a 0.2C-5Mn-2.5Al medium-Mn steel that exhibits a very high degree of work hardening due to transformation-induced plasticity (TRIP) during plastic deformation. During TRIP, paramagnetic retained austenite transforms to ferromagnetic martensite under plastic strain, generating significant work hardening. The rate of work hardening varies greatly with the processing parameters, notably the intercritical annealing temperature. These steels also often deform heterogeneously, through the propagation of Lüders or PLC (Portevin-Le Chatelier) strain bands. This research develops a method to characterize the kinetics of the TRIP effect through measurements of the sample's magnetic properties. The method is novel in that it is performed in situ, with no effect on the tensile test, and is able to correct for the effects of the applied stress on the magnetic properties. The results of these experiments were compared with characterizations of the strain bands to demonstrate that TRIP coincides with the passage of a Lüders or PLC band. The strain rate sensitivity of the steels is analyzed, and the presence and type of PLC bands are characterized with respect to the transformation kinetics.
Bijar, Ahmad. "Recalages non-linéaires pour la génération automatique de modèles biomécaniques patients-spécifiques à partir d'imagerie médicale." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAS010.
During the last few years, there has been considerable interest in computer-aided medical design, diagnosis, and decision-making techniques, which are rapidly entering mainstream treatment. Finite element analysis (FEA) of 3D models is one of the most popular and efficient numerical methods for solving complex problems such as soft tissue deformation or orthopedic implant design and configuration. However, the accuracy of the solutions depends strongly on the quality and accuracy of the finite element (FE) meshes. Generating such high-quality subject- or patient-specific meshes can be extremely time consuming and labor intensive, as the process includes the geometry extraction of the target organ and meshing algorithms. In clinical applications, where patient specificity has to be taken into account via the generation of adapted meshes, these problems become methodological bottlenecks. Various studies have addressed these challenges by bypassing the meshing phase with atlas-based frameworks that deform an atlas FE mesh; however, these methods still rely on a geometrical description of the target organ, such as contours, 3D surface models, or a set of landmarks. In this context, the aim of this thesis is to investigate how registration techniques can overcome these bottlenecks of atlas-based approaches. We first propose an automatic atlas-based method that combines volumetric anatomical image registration with the morphing of an atlas FE mesh. The method extracts a 3D transformation by registering the atlas's volumetric image to the subject's; the subject-specific mesh is then generated by deforming a high-quality atlas FE mesh with the derived transformation. The registration process is designed in such a way as to preserve the regularity and quality of the meshes for subsequent FEA. A first step towards the evaluation of our approach, namely the accuracy of the inter-subject registration process, is provided using a CT ribcage data set. Subject-specific tongue meshes are then generated for two healthy subjects and two patients suffering from tongue cancer, in pre- and post-surgery conditions. To illustrate a tentative fully automatic process compatible with clinical constraints, some functional consequences of tongue surgery are simulated for one of the patients, modeling the removal of the tumor and the replacement of the corresponding tissues with a passive flap. Without requiring any formal prior knowledge of the shape of the target organ or any meshing algorithm, high-quality subject-specific FE meshes are generated while the subject's geometrical properties are successfully captured. Building on this method, we develop an original atlas-based approach that employs the information provided by anatomical images and diffusion tensor imaging (DTI) based muscle fibers for the recognition and registration of fiber bundles that can be integrated into the subject-specific FE meshes. In contrast to DT-MR image registration techniques, which include a reorientation of the tensors during or after the transformation estimation, our methodology avoids this issue and directly aligns the fiber bundles. This also makes it possible to handle limited or distorted DTI data, by deforming the atlas fiber structure according to the most reliable, undistorted fibers of the subject. This matters because the classification and determination of muscular sub-structures otherwise require manual intervention on thousands or millions of fibers for each subject, under the limitations of the DTI acquisition process and fiber tractography techniques. To evaluate the performance of our method in recognizing the subject's fiber bundles and, accordingly, in deforming the atlas ones, a simulated data set is used. In addition, the feasibility of our method is demonstrated on an acquired human tongue data set.
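The morphing step of such an atlas-based pipeline, applying a registration-derived displacement field to the atlas mesh nodes while keeping its connectivity, can be sketched as follows (a generic illustration; the field, spacing, and toy values are invented):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def morph_mesh(nodes, displacement, spacing=(1.0, 1.0, 1.0)):
    """Deform atlas FE mesh nodes with a dense displacement field.

    nodes        : (N, 3) node coordinates in world units
    displacement : (3, D, H, W) field (world units) from registering
                   the atlas volume onto the subject volume
    The field is sampled at each node by trilinear interpolation; the
    atlas mesh connectivity is kept, which is what preserves element
    quality in atlas-based mesh generation.
    """
    voxel_coords = (nodes / np.asarray(spacing)).T          # (3, N)
    moved = nodes.copy()
    for axis in range(3):
        moved[:, axis] += map_coordinates(displacement[axis],
                                          voxel_coords, order=1)
    return moved

disp = np.zeros((3, 10, 10, 10)); disp[0] += 2.5            # rigid-shift sketch
nodes = np.array([[4.0, 4.0, 4.0], [5.0, 5.0, 5.0]])
print(morph_mesh(nodes, disp))
```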
Benali, Aadil. "Contribution à l'amélioration de la qualité du test de circuits imprimés nus : exploitation des informations CAO, par des techniques de traitement d'images, en vue de la génération des données du test électrique." Grenoble INPG, 1996. http://www.theses.fr/1996INPG0189.
Villéger, Emmanuel. "Constance de largeur et désocclusion dans les images digitales." Phd thesis, Université de Nice Sophia-Antipolis, 2005. http://tel.archives-ouvertes.fr/tel-00011229.
Full textnous regroupons des points lumineux et/ou des objets selon certaines
règles pour former des objets plus gros, des Gestalts.
La première partie de cette thèse est consacrée à la constance de
largeur. La Gestalt constance de largeur regroupe des points situés
entre deux bords qui restent parallèles. Nous cherchons donc dans les
images des courbes ``parallèles.'' Nous voulons faire une détection
a contrario, nous proposons donc une quantification du ``non
parallélisme'' de deux courbes par trois méthodes. La première méthode
utilise un modèle de génération de courbes régulières et nous
calculons une probabilité. La deuxième méthode est une méthode de
simulation de type Monte-Carlo pour estimer cette probabilité. Enfin
la troisième méthode correspond à un développement limité de la
première en faisant tendre un paramètre vers 0 sous certaines
contraintes. Ceci conduit à une équation aux dérivées partielles
(EDP). Parmi ces trois méthodes la méthode de type Monte-Carlo est
plus robuste et plus rapide.
L'EDP obtenue est très similaire à celles utilisées pour la
désocclusion d'images. C'est pourquoi dans la deuxième partie de cette
thèse nous nous intéressons au problème de la désocclusion. Nous
présentons les méthodes existantes puis une nouvelle méthode basée sur
un système de deux EDPs dont l'une est inspirée de celle de la
première partie. Nous introduisons la probabilité de l'orientation du
gradient de l'image. Nous prenons ainsi en compte l'incertitude sur
l'orientation calculée du gradient de l'image. Cette incertitude est
quantifiée en relation avec la norme du gradient.
Avec la quantification du non parallélisme de deux courbes, l'étape
suivante est la détection de la constance de largeur dans
les images. Il faut alors définir un seuil pour sélectionner les
bonnes réponses du détecteur et surtout parmi les réponses définir
des réponses ``maximales.'' Le système d'EDPs pour
la désocclusion dépend de beaucoup de paramètres, il faut trouver une
méthode de calibration des paramètres pour obtenir de bons résultats
adaptés à chaque image.
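The Monte Carlo variant of the non-parallelism quantification can be sketched as follows: the spread of the pointwise gap between two curves is compared against pairs of random "regular" curves, in the a contrario spirit (the curve model, the spread statistic, and all parameters are stand-ins for illustration, not the thesis's definitions):

```python
import numpy as np

def non_parallelism(c1, c2):
    """Spread of the pointwise gap between two sampled curves (0 = parallel)."""
    return np.linalg.norm(c1 - c2, axis=1).std()

def random_smooth_curve(n, rng, smooth=7):
    """Toy generator of 'regular' curves: smoothed 2-D random walk."""
    steps = rng.normal(size=(n + smooth - 1, 2))
    kernel = np.ones(smooth) / smooth
    sm = np.column_stack([np.convolve(steps[:, k], kernel, 'valid')
                          for k in range(2)])
    return np.cumsum(sm, axis=0)

def p_value_parallelism(c1, c2, trials=2000, seed=0):
    """Monte Carlo estimate of P(random pair is at least as parallel)."""
    rng = np.random.default_rng(seed)
    obs = non_parallelism(c1, c2)
    n = len(c1)
    hits = sum(non_parallelism(random_smooth_curve(n, rng),
                               random_smooth_curve(n, rng)) <= obs
               for _ in range(trials))
    return (hits + 1) / (trials + 1)

t = np.linspace(0, 4, 100)
band1 = np.column_stack([t, np.sin(t)])
band2 = band1 + [0.0, 0.5]                  # strictly parallel offset
print(p_value_parallelism(band1, band2))    # small => unlikely by chance
```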
Pelcat, Maxime. "Prototypage Rapide et Génération de Code pour DSP Multi-Coeurs Appliqués à la Couche Physique des Stations de Base 3GPP LTE." Phd thesis, INSA de Rennes, 2010. http://tel.archives-ouvertes.fr/tel-00578043.
Durand, Brieux. "Conception et réalisation d'une nouvelle génération de nano-capteurs de gaz à base de nanofils semi-conducteurs." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30240.
In recent years, research and development efforts on gas sensors have converged on the use of nanomaterials to optimize performance. This new generation promises many advantages, especially in terms of miniaturization and reduced energy consumption. Furthermore, the gas detection parameters (sensitivity, detection limit, response time, etc.) are improved thanks to the high surface-to-volume ratio of the sensitive part. Such sensors can thus be integrated into ultrasensitive, autonomous, compact, and transportable detection systems. In this thesis, we propose to use 3D semiconductor nanowire networks to create highly sensitive and selective gas sensors. The objective of this work is to provide a highly sensitive sensor featuring a low detection limit (in the ppb range) that can be embedded in CMOS devices. In addition, the process is generic and adaptable to many types of materials, allowing the discrimination of several gases and convergence towards an electronic nose. The first part of the dissertation deals with the development of a large-scale, reproducible process, compatible with the Si processing industry and conventional (CMOS) tools, to obtain a sensor based on a 3D nanowire architecture. The device is composed of two symmetrical aluminum contacts at the extremities of the nanowires, the top contact being realized with an air-bridge approach. The second part of this work presents the gas-sensing performance of the components and the associated working mechanisms. A very high response (30%) is obtained at 50 ppb of NO2, compared to the state of the art (25% at 200 ppb). The approach can selectively measure very low gas concentrations (<1 ppb) in realistic working conditions: humidity (tested up to 70%) and mixtures with other, more concentrated, interfering gases. In addition, the reversibility of the sensor is natural and occurs at room temperature without requiring specific conditions.
Tan, Shengbiao. "Contribution à la reconnaissance automatique des images : application à l'analyse de scènes de vrac planaire en robotique." Paris 11, 1987. http://www.theses.fr/1987PA112349.
A method for object modeling and for the automatic recognition of overlapping objects is presented. Our work is composed of three essential parts: image processing, object modeling, and the evaluation and implementation of the stated concepts. In the first part, we present an edge-encoding method based on a re-sampling of data encoded according to Freeman; this method generates an isotropic, homogeneous, and very precise representation. The second part relates to object modeling, an important step that makes the recognition work much easier. The proposed method characterizes a model with two groups of information: a description group containing the primitives, and a discrimination group containing data packs called "transition vectors." Based on this original organization of information, a "relative learning" scheme is able to select, ignore, and update the information concerning already-learned objects according to the new information to be included in the database. Recognition is a two-pass process: the first pass determines very efficiently the presence of objects by making use of each object's particularities, and this hypothesis is then confirmed or rejected by a fine verification pass. The last part describes the experimental results in detail. We demonstrate the robustness of the algorithms on images under both poor lighting and object-overlap conditions. The system, named SOFIA, has been installed in an industrial vision system series and works in real time.
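The Freeman encoding underlying the edge representation can be sketched as follows; the re-sampling step that makes the thesis's representation isotropic and homogeneous is omitted here (a minimal illustration of the base chain code only):

```python
# Freeman 8-connectivity: (dx, dy) -> code, counter-clockwise from east.
FREEMAN = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
           (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def freeman_encode(contour):
    """Chain-code a closed contour given as an ordered list of pixels."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        codes.append(FREEMAN[(x1 - x0, y1 - y0)])
    return codes

square = [(0, 0), (1, 0), (1, 1), (0, 1)]     # tiny closed contour
print(freeman_encode(square))                  # [0, 2, 4, 6]
```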
Samuth, Benjamin. "Hybrid models combining deep neural representations and non-parametric patch-based methods for photorealistic image generation." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC249.
Image generation has seen great progress thanks to the quick evolution of deep neural models. Their reach has gone beyond the scientific domain, and multiple legitimate concerns and questions have been raised, in particular about how the training data are treated. By contrast, lightweight and explainable models would be a fitting answer to these emerging problematics, but their quality and range of applications are limited. This thesis strives to build "hybrid models" that efficiently combine the qualities of lightweight or frugal methods with the performance of deep networks. We first study the case of artistic style transfer with a multiscale, constrained patch-based method, and qualitatively observe the potential of perceptual metrics in the process. Besides, we develop two hybrid models for photorealistic face generation, each built around a pretrained auto-encoder. The first tackles the problem of few-shot face generation with the help of latent patches; results show notable robustness and convincing synthesis with a simple sequential patch-based algorithm. The second uses Gaussian mixture models to generalize the previous method to wider varieties of faces. In particular, we show that these models perform similarly to other neural methods while removing a non-negligible number of parameters and computing steps.
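The second hybrid model's idea, fitting a Gaussian mixture in the latent space of a pretrained auto-encoder and decoding new samples, can be sketched with scikit-learn (the encoder and decoder are mocked here with toy data and an identity function; component counts and sizes are arbitrary):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in for auto-encoder latents: rows = encoded face images.
# In the hybrid model, `latents` would come from a pretrained encoder
# and `decode` would be its decoder; both are mocked here.
rng = np.random.default_rng(5)
latents = np.vstack([rng.normal(m, 0.3, size=(200, 16))
                     for m in (-1.0, 1.0)])        # two fake "face modes"

gmm = GaussianMixture(n_components=2, covariance_type='full',
                      random_state=0).fit(latents)
new_codes, _ = gmm.sample(4)                       # draw novel latent codes
decode = lambda z: z                               # placeholder decoder
fakes = decode(new_codes)
print(gmm.score(latents), fakes.shape)             # avg log-likelihood, (4, 16)
```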
Cao, Yi-Heng. "Apprentissage génératif pour la synthèse d'images médicales dynamiques 4D." Electronic Thesis or Diss., Brest, 2024. http://www.theses.fr/2024BRES0004.
Four-dimensional computed tomography (4DCT) involves reconstructing an acquisition in multiple phases to track the movements of internal organs and tumors. It is used routinely for radiotherapy treatment planning in lung cancer, but it exposes patients to higher radiation doses, up to six times those of a conventional three-dimensional computed tomography (3DCT) scan. Deep learning methods from the field of computer vision are gaining significant interest within the medical imaging community. Among these approaches, generative models stand out for their ability to generate synthetic images that faithfully replicate the appearance and statistical characteristics of images acquired on real systems. In this thesis, we explore the use of a generative model for dynamic image generation. We propose a model capable of generating patient-specific respiratory motion from a diagnostic 3DCT image and respiratory data. The goal is to enable radiologists to delineate target volumes and organs at risk, and to perform dose calculations, on these dynamic synthetic images. This method would reduce the need for a 4DCT acquisition, thereby reducing the patient's radiation exposure.
Li, Huiyu. "Exfiltration et anonymisation d'images médicales à l'aide de modèles génératifs." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4041.
This thesis aims to address specific safety and privacy issues that arise when dealing with sensitive medical images within data lakes, first by exploring potential data leakage when exporting machine learning models, and then by developing an anonymization approach that protects data privacy. Chapter 2 presents a novel data exfiltration attack, termed Data Exfiltration by Compression (DEC), which leverages image compression techniques to exploit vulnerabilities in the model exporting process. This attack is performed when exporting a trained network from a remote data lake and is applicable independently of the image processing task considered. By exploring both lossless and lossy compression methods, this chapter demonstrates how DEC can effectively be used to steal medical images and reconstruct them with high fidelity, using two public CT and MR datasets. The chapter also explores mitigation measures that a data owner can implement to prevent the attack: it first investigates the application of differential privacy measures, such as Gaussian noise addition, explores how attackers can craft attacks resilient to differential privacy, and finally proposes an alternative model export strategy involving model fine-tuning and code verification. Chapter 3 introduces the Generative Medical Image Anonymization framework, a novel approach to balancing the trade-off between preserving patient privacy and maintaining the utility of the generated images for downstream tasks. The framework separates the anonymization process into two key stages: first, it extracts identity- and utility-related features from medical images using specially trained encoders; then, it optimizes the latent code to achieve the desired trade-off between anonymity and utility. We employ identity and utility encoders to verify patient identities and detect pathologies, and use a generative adversarial network-based auto-encoder to create realistic synthetic images from the latent space. During optimization, we incorporate these encoders into novel loss functions to produce images that remove identity-related features while maintaining their utility for a classification problem. The effectiveness of this approach is demonstrated through extensive experiments on the MIMIC-CXR chest X-ray dataset, where the generated images successfully support lung pathology detection. Chapter 4 builds upon the work of Chapter 3 by utilizing generative adversarial networks (GANs) to create a more robust and scalable anonymization solution. The framework is structured in two distinct stages: first, we develop a streamlined encoder and a novel training scheme to map images into a latent space; in the second stage, we minimize the dual loss functions proposed in Chapter 3 to optimize the latent representation of each image. This method ensures that the generated images effectively remove some identifiable features while retaining crucial diagnostic information. Extensive qualitative and quantitative experiments on the MIMIC-CXR dataset demonstrate that our approach produces high-quality anonymized images that maintain essential diagnostic details, making them well suited for training machine learning models for lung pathology classification. The concluding chapter summarizes the scientific contributions of this work and addresses the remaining issues and challenges in producing secure and privacy-preserving sensitive medical data.
Simon, Chane Camille. "Intégration de systèmes d'acquisition de données spatiales et spectrales haute résolution, dans le cadre de la génération d'informations appliquées à la conservation du patrimoine." Thesis, Dijon, 2013. http://www.theses.fr/2013DIJOS008/document.
The concern of this PhD thesis is the registration of featureless 3D and multispectral datasets describing cultural heritage objects. In this context there are few natural salient features between the complementary datasets, and the use of targets is generally proscribed. We thus develop a technique based on the photogrammetric tracking of the acquisition systems in use. A series of simulations was performed to evaluate the accuracy of our method in three configurations chosen to represent a variety of cultural heritage objects. These simulations show that a spatial tracking accuracy of 0.020 mm and an angular accuracy of 0.100 mrad can be achieved using four 5 Mpx cameras when digitizing an area of 400 mm x 700 mm. The accuracy of the final registration relies on the success of a series of optical and geometrical calibrations and on their stability over the duration of the full acquisition process. The accuracy of the tracking and registration was extensively tested in laboratory settings. We first evaluated the potential for multiview 3D registration; the method was then used to project multispectral images onto 3D models. Finally, we used the registered data to improve reflectance estimation from the multispectral datasets.
Mpe, A. Guilikeng Albert. "Un système de prédiction/vérification pour la localisation d'objets tridimentionnels." Compiègne, 1990. http://www.theses.fr/1990COMPD286.
Full textRoca, Vincent. "Harmonisation multicentrique d'images IRM du cerveau avec des modèles génératifs non-supervisés." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILS060.
Magnetic resonance imaging (MRI) enables the acquisition of brain images used in the study of neurologic and psychiatric diseases. MR images are increasingly used in statistical studies to identify biomarkers and in predictive models. To improve statistical power, these studies sometimes pool data acquired with different machines, which may introduce technical variability and bias into the analysis of biological variability. In recent years, harmonization methods have been proposed to limit the impact of this variability. Many studies have notably worked on generative models based on unsupervised deep learning; the doctoral research lies within the context of these models, which constitute a promising but still exploratory research field. In the first part of this manuscript, a review of prospective harmonization methods is given. Different methods consisting of normalization applied at the image level, domain translation, or style transfer are described to understand their respective issues, with a special focus on unsupervised generative models. The second part concerns methods for evaluating retrospective harmonization. A review of these methods is first conducted; the most common rely on "traveling" subjects to provide ground truths for harmonization. The review also presents evaluations employed in the absence of such subjects: the study of inter-domain differences, biological patterns, and the performance of predictive models. Experiments showing the limits of some commonly employed approaches, and important points to consider in their use, are then proposed. The third part presents a new model for the harmonization of brain MR images based on a CycleGAN architecture. In contrast with previous works, the model is three-dimensional and processes full volumes. MR images from six datasets that vary in acquisition parameters and age distributions are used to test the method. Analyses of intensity distributions, brain volumes, image quality metrics, and radiomic features show an efficient homogenization across the different sites of the study. Next, the preservation and reinforcement of biological patterns are demonstrated through an analysis of the evolution of gray-matter volume estimates with age, age-prediction experiments, ratings of radiologic patterns in the images, and a supervised evaluation with a traveling-subject dataset. The fourth part presents another original harmonization method, with major updates to the first one, to establish a "universal" generator able to harmonize images without knowing their domain of origin. After training with data acquired on eleven MRI scanners, experiments on images from sites not seen during training show a reinforcement of brain patterns related to age and Alzheimer's disease after harmonization. Moreover, comparisons with other intensity harmonization approaches suggest that the model is more efficient and more robust across different tasks subsequent to harmonization. These works are a significant contribution to the domain of retrospective harmonization of brain MR images. The bibliographic documentation provides a methodological knowledge base for future studies in this domain, whether for harmonization itself or for validation. In addition, the two developed models are robust, publicly available tools that may be integrated into future multicenter MRI studies.
Landes, Pierre-Edouard. "Extraction d'information pour l'édition et la synthèse par l'exemple en rendu expressif." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00637651.
Full textSallé, Guillaume. "Apprentissage génératif à bas régime des données pour la segmentation d'images en oncologie." Electronic Thesis or Diss., Brest, 2024. http://www.theses.fr/2024BRES0032.
In statistical learning, the performance of models is affected by various biases present within the data, including data scarcity and domain shift. This thesis focuses on reducing their impact in the field of pathological structure segmentation in medical imaging. Our goal is to minimize data discrepancies at the region-of-interest (ROI) level between the training source domain and the target deployment domain, whether they are intrinsic to the data or caused by limited data availability. To this end, we present an adaptive data augmentation strategy based on the analysis of the intensity distribution of the ROIs in the deployment domain. A first contribution, which we call naive augmentation, consists of altering the appearance of the training ROIs to better match the characteristics of the ROIs in the deployment domain. A second augmentation, complementing the first, makes the alteration more realistic with respect to the properties of the target domain by harmonizing the characteristics of the altered image. For this, we employ a generative model trained on a single unlabeled image from the deployment domain (a one-shot approach), making the technique usable in any data regime encountered. In this way, we enhance the robustness of the downstream segmentation model for ROIs whose characteristics were initially underrepresented in the deployment domain. The effectiveness of this method is evaluated under various data regimes and in different clinical contexts (MRI, CT, CXR). Our approach demonstrated impressive results in a tumor segmentation challenge at MICCAI 2022.
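A naive ROI-level augmentation in the spirit of the first contribution can be sketched as a histogram matching restricted to the region of interest (a generic illustration; the thesis's actual alteration and one-shot harmonization steps are not reproduced):

```python
import numpy as np

def match_roi_histogram(image, mask, target_values):
    """Map intensities inside `mask` onto the distribution of
    `target_values` (intensities observed in deployment-domain ROIs).
    Classic histogram matching restricted to a region of interest.
    """
    out = image.astype(np.float64).copy()
    src = out[mask]
    src_sorted = np.sort(src)
    tgt_sorted = np.sort(np.asarray(target_values, dtype=np.float64))
    # Quantile transform: source quantiles -> target quantiles.
    quantiles = np.searchsorted(src_sorted, src) / max(len(src) - 1, 1)
    out[mask] = np.interp(quantiles, np.linspace(0, 1, len(tgt_sorted)),
                          tgt_sorted)
    return out

rng = np.random.default_rng(6)
img = rng.normal(100, 10, (32, 32))
roi = np.zeros((32, 32), bool); roi[8:24, 8:24] = True
target = rng.normal(160, 25, 500)                  # deployment-domain ROI stats
aug = match_roi_histogram(img, roi, target)
print(img[roi].mean().round(1), aug[roi].mean().round(1))
```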
Rana, Aakanksha. "Analyse d'images haute gamme dynamique." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0015.
High dynamic range (HDR) imaging makes it possible to capture a wider dynamic range and color gamut, enabling us to draw on subtle yet discriminating details present in both the extremely dark and the extremely bright areas of a scene. Such a property is of potential interest for computer vision algorithms, whose performance degrades substantially when scenes are captured using traditional low dynamic range (LDR) imagery. While such algorithms have been exhaustively designed for traditional LDR images, little work has been done so far in the context of HDR content. In this thesis, we present a quantitative and qualitative analysis of HDR imagery for such task-specific algorithms. The thesis begins by identifying the most natural and important questions about using HDR content for the low-level feature extraction task, which is of fundamental importance for many high-level applications such as stereo vision, localization, matching, and retrieval. Through a performance evaluation study, we demonstrate how different HDR-based modalities enhance algorithm performance with respect to LDR on a proposed dataset. However, we observe that none of them does so optimally across all scenes. To examine this sub-optimality, we investigate, through an experimental study, the importance of task-specific objectives for designing optimal modalities. Based on these insights, we attempt to surpass this sub-optimality by designing task-specific HDR tone-mapping operators (TMOs). We propose three learning-based methodologies aimed at optimally mapping HDR content to enhance the efficiency of local feature extraction at each stage, namely detection, description, and final matching.
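For reference, a classic hand-designed global TMO, Reinhard's operator, which the thesis's learned task-specific TMOs replace, can be sketched as follows (the key value `a` and the toy luminance data are arbitrary):

```python
import numpy as np

def reinhard_tmo(hdr_luminance, a=0.18, eps=1e-6):
    """Global Reinhard operator: compress HDR luminance into [0, 1)."""
    L = np.asarray(hdr_luminance, dtype=np.float64)
    L_avg = np.exp(np.mean(np.log(L + eps)))       # log-average luminance
    L_scaled = a * L / L_avg                       # key-value scaling
    return L_scaled / (1.0 + L_scaled)             # smooth range compression

rng = np.random.default_rng(7)
hdr = rng.lognormal(mean=0.0, sigma=3.0, size=(64, 64))  # wide dynamic range
ldr = reinhard_tmo(hdr)
print(hdr.max() / hdr.min(), ldr.min().round(3), ldr.max().round(3))
```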
Ben Aziza, Sassi. "Etude d'un système de conversion analogique-numérique rapide de grande résolution adapté aux nouvelles générations de capteurs d'images CMOS." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT056.
CMOS technologies nowadays represent more than 90% of the image sensor market, given their features, notably the possibility of integrating entire intelligent systems on the same chip (SoC, System-on-Chip), thereby allowing the implementation of ever more complex algorithms in the new generations of image sensors. New techniques have emerged, such as high-dynamic-range reconstruction, which requires the acquisition of several images to build one, thus multiplying the frame rate. These new constraints require a drastic increase in image rate for sensors of considerable size (up to 30 Mpix and more). At the same time, the ADC resolution has to be increased to extract more details (up to 14 bits). With all these demanding specifications, analog-to-digital conversion capabilities have to be boosted as far as possible. These capabilities fall into two main research axes, which are the pillars of this PhD work: the study of the reachable limits in terms of performance (speed, resolution, low noise, low power consumption, and small design pitch), and the management of the highly parallel operation linked to the structure of an image sensor, where solutions have to be found to avoid image artefacts and preserve image quality.
Bouchard, Guillaume. "Les modèles génératifs en classification supervisée et applications à la catégorisation d'images et à la fiabilité industrielle." Phd thesis, Université Joseph Fourier (Grenoble), 2005. http://tel.archives-ouvertes.fr/tel-00541059.
Full textDurnez, Clémentine. "Analyse des fluctuations discrètes du courant d’obscurité dans les imageurs à semi-conducteurs à base de silicium et Antimoniure d’Indium." Thesis, Toulouse, ISAE, 2017. http://www.theses.fr/2017ESAE0030/document.
Full textImaging has always been a compelling field, all the more so now that it is possible to see beyond human vision in the infrared and ultraviolet spectra. For each field of application, some materials are better adapted than others: to capture visible light, Silicon is a good candidate, because it has been widely studied and is used in our everyday life. In the infrared, more particularly the MWIR spectral band, InSb has proved stable and reliable, even though its narrow bandgap requires operation at cryogenic temperatures. In this work, a parasitic signal called Random Telegraph Signal (RTS), which appears in both materials (and in others such as HgCdTe or InGaAs), is analyzed. This signal originates in the pixel photodiode and corresponds to a discrete fluctuation of the dark current with time, like a blinking signal. It can cause detector calibration troubles or false star detections, for example. This study aims at characterizing RTS and localizing its exact origin within the photodiode in order to predict or mitigate the phenomenon
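To illustrate the phenomenon being characterized, here is a minimal sketch of a two-level RTS modelled as a Markov process superimposed on a mean dark current. The amplitudes, dwell-time constants, and noise level are illustrative assumptions, not measured values from the thesis.

```python
# Two-level random telegraph signal (RTS) as a Markov process with
# exponential dwell times, on top of a baseline dark current.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 10_000           # number of frames
dt = 0.1                     # assumed time between frames (s)
tau_up, tau_down = 5.0, 8.0  # assumed mean dwell times in high/low state (s)
i_dark, amp = 50.0, 12.0     # assumed baseline dark current and RTS amplitude (e-/s)

state = 0
signal = np.empty(n_samples)
for k in range(n_samples):
    # Probability of leaving the current state during dt (exponential dwell times)
    p_switch = dt / (tau_down if state == 0 else tau_up)
    if rng.random() < p_switch:
        state = 1 - state
    signal[k] = i_dark + amp * state + rng.normal(0.0, 1.0)  # residual noise term

# A detection pipeline would now threshold or cluster 'signal' to recover
# the discrete levels and the dwell-time statistics of each pixel.
```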
Tata, Zafiarifety Christian. "Simulation et traitement des données d’un imageur à rayons Gamma pour une nouvelle génération de caméras Compton." Electronic Thesis or Diss., Troyes, 2019. http://www.theses.fr/2019TROY0028.
Full textThe localization of radioactivity is a crucial step in the dismantling of nuclear power plants. For this purpose, several detection systems have been developed, such as the pinhole camera, which uses lead or tungsten collimators but suffers from low detection efficiency. The Compton camera exploits the kinematics of Compton scattering. It represents a very promising alternative to conventional systems because it offers several advantages: high detection efficiency, reconstruction of radioactive source images with high spatial resolution and a wide field of view, and the ability to perform spectroscopy with good energy resolution. In this work we developed a new Compton camera based on two monolithic CeBr3 crystals equipped with Philips DPC3200 photodetectors and assembled with materials and processes developed by Damavan, in order to obtain detection heads of optimal quality adapted to the constraints of the Compton camera. We set up a time and energy calibration procedure for the detection heads and implemented a new position-calculation algorithm based on a new Monte Carlo simulated model. Finally, once the basic concepts of its development had been validated (time, energy, and position), we carried out a global evaluation of the camera's performance
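The Compton kinematics underlying the camera can be made explicit: the scattering angle, which defines the cone on which the source lies, follows from the energies deposited in the two detection heads via cos(theta) = 1 - m_e c^2 (1/E_absorbed - 1/(E_scatter + E_absorbed)). The sketch below computes this cone half-angle; the event energies are illustrative (e.g., a 662 keV Cs-137 photon), not data from the thesis.

```python
# Compton cone half-angle from the energy deposits in the two heads.
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_angle(e_scatter_kev: float, e_absorbed_kev: float) -> float:
    """Cone half-angle (deg) from scatterer and absorber energy deposits."""
    e_total = e_scatter_kev + e_absorbed_kev
    cos_theta = 1.0 - ME_C2 * (1.0 / e_absorbed_kev - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically impossible event (bad energies)")
    return math.degrees(math.acos(cos_theta))

# ~48 degrees for a 662 keV photon depositing 200 keV in the scatterer
print(compton_angle(200.0, 462.0))
```

Intersecting the cones of many such events is what yields the reconstructed source image, which is why the energy resolution of the heads directly drives the angular resolution.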
Grechka, Asya. "Image editing with deep neural networks." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS683.pdf.
Full textImage editing has a rich history which dates back two centuries. That said, "classic" image editing requires strong artistic skills as well as considerable time, often on the scale of hours, to modify an image. In recent years, considerable progress has been made in generative modeling, enabling realistic and high-quality image synthesis. However, real image editing remains a challenge that requires a balance between novel generation and faithful preservation of parts of the original image. In this thesis, we explore different approaches to editing images, leveraging three families of generative networks: GANs, VAEs, and diffusion models. First, we study how to use a GAN to edit a real image. While methods exist to modify generated images, they do not generalize easily to real images. We analyze the reasons for this and propose a solution for better projecting a real image into the GAN's latent space so as to make it editable. Then, we use variational autoencoders with vector quantization to directly obtain a compact image representation (which we could not obtain with GANs) and optimize the latent vector to match a desired text input. We aim to constrain this problem, which on the face of it could be vulnerable to adversarial attacks, and propose a method to choose the hyperparameters while simultaneously optimizing image quality and fidelity to the original image. We present a robust evaluation protocol and show the interest of our method. Finally, we address image editing from the angle of inpainting. Our goal is to synthesize part of an image while leaving the rest unmodified. For this, we leverage pre-trained diffusion models and build on their classic inpainting method, replacing, at each denoising step, the part we do not wish to modify with the noised real image. However, this method leads to a disharmony between the real and generated parts. We therefore propose an approach based on the gradient of a loss that evaluates the harmonization of the two parts, and we guide the denoising process with this gradient
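The inpainting scheme described in the last part can be sketched as a standard reverse-diffusion loop in which the "known" region is overwritten at every step with a re-noised copy of the real image. The sketch below uses a stub denoiser and DDPM-style schedule purely for illustration; a real system would use a pre-trained diffusion model, and the thesis adds a harmonization gradient on top of this basic scheme.

```python
# Schematic inpainting loop: denoise the whole image, then replace the
# unmasked region with the real image noised to the current step's level.
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def predict_noise(x_t, t):
    """Stub for a trained epsilon-predictor network (assumption)."""
    return np.zeros_like(x_t)

def inpaint(x_real, mask):
    """mask == 1 where new content is synthesized, 0 where x_real is kept."""
    x_t = rng.normal(size=x_real.shape)
    for t in reversed(range(T)):
        # Standard DDPM reverse step on the whole image
        eps = predict_noise(x_t, t)
        mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.normal(size=x_t.shape) if t > 0 else 0.0
        x_t = mean + np.sqrt(betas[t]) * noise
        # Overwrite the region to keep with the real image noised to level t-1
        if t > 0:
            x_known = (np.sqrt(alpha_bar[t - 1]) * x_real
                       + np.sqrt(1.0 - alpha_bar[t - 1]) * rng.normal(size=x_real.shape))
            x_t = mask * x_t + (1.0 - mask) * x_known
    return x_t
```

The disharmony the thesis addresses arises exactly at the mask boundary of this loop, which is why its contribution is a gradient-based guidance term rather than a change to the replacement rule itself.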
Ozcelik, Furkan. "Déchiffrer le langage visuel du cerveau : reconstruction d'images naturelles à l'aide de modèles génératifs profonds à partir de signaux IRMf." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES073.
Full textThe great minds of humanity have always been curious about the nature of mind, brain, and consciousness. Through physical and thought experiments, they tried to tackle challenging questions about visual perception. As neuroimaging techniques developed, neural encoding and decoding techniques provided a profound understanding of how we process visual information. Advances in Artificial Intelligence and Deep Learning have also influenced neuroscientific research. With the emergence of deep generative models such as Variational Autoencoders (VAE), Generative Adversarial Networks (GAN), and Latent Diffusion Models (LDM), researchers have used these models in neural decoding tasks such as the visual reconstruction of perceived stimuli from neuroimaging data. The current thesis provides two frameworks in this area of reconstructing perceived stimuli from neuroimaging data, particularly fMRI data, using deep generative models. These frameworks focus on different aspects of the visual reconstruction task than their predecessors, and hence may bring valuable outcomes for the studies that follow. The first study of the thesis (described in Chapter 2) utilizes a particular generative model, IC-GAN, to capture both the semantic and realistic aspects of visual reconstruction. The second study (described in Chapter 3) brings a new perspective on visual reconstruction by fusing decoded information from different modalities (e.g., text and image) using recent latent diffusion models. Both studies achieve state-of-the-art results on their benchmarks, exhibiting high-fidelity reconstructions of different attributes of the stimuli. In both studies, we propose region-of-interest (ROI) analyses to understand the functional properties of specific visual regions using our neural decoding models. Statistical relations between ROIs and decoded latent features show that early visual areas carry more information about low-level features (which concern the layout and orientation of objects), while higher visual areas are more informative about high-level semantic features. We also observed that ROI-optimal images generated with these visual reconstruction frameworks capture the functional selectivity properties of ROIs examined in many prior neuroscientific studies. By providing the results of two visual reconstruction frameworks and the ROI analyses, this thesis attempts to offer valuable insights for future studies in neural decoding, visual reconstruction, and neuroscientific exploration using deep learning models; its findings may help researchers working in cognitive neuroscience and have implications for brain-computer interface applications
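A minimal sketch of the decoding stage common to frameworks of this kind: a regularized linear map from fMRI voxel activity to the latent features of a generative model. The random placeholder data, dimensions, and the specific Ridge regularization strength below are illustrative assumptions, not the thesis's actual pipeline or data.

```python
# Decode generative-model latents from fMRI activity with ridge regression.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels, latent_dim = 800, 4000, 512

X = rng.normal(size=(n_trials, n_voxels))    # fMRI responses per stimulus (placeholder)
Z = rng.normal(size=(n_trials, latent_dim))  # latents of the seen images (placeholder)

decoder = Ridge(alpha=1e4)                   # alpha would be cross-validated in practice
decoder.fit(X[:700], Z[:700])
z_hat = decoder.predict(X[700:])             # decoded latents for held-out trials

# z_hat would then condition a generator (e.g., IC-GAN or a latent
# diffusion model) to reconstruct the perceived images.
```

Restricting the columns of X to the voxels of a single ROI and inspecting which latent dimensions remain decodable is one simple way to run the kind of ROI analysis the abstract describes.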
Letheule, Nathan. "Apports de l'Apprentissage Profond pour la simulation d'images SAR." Electronic Thesis or Diss., université Paris-Saclay, 2024. https://theses.hal.science/tel-04651643.
Full textSimulation is a valuable tool for many SAR imaging applications; however, large simulated images are not yet realistic enough to fool a radar image expert. This thesis evaluates to what extent recent advances in deep learning can improve the quality of such simulations. As a first step, we define a method for measuring the realism of simulated SAR images by comparing them with real images; the resulting metrics are then used to evaluate simulation results. Secondly, two deep-learning-based simulation frameworks with different philosophies are proposed. The first does not take physical knowledge of the imaging process into account and learns the transformation of an optical image into a radar image using a cGAN architecture. The second is based on a physical simulator developed at ONERA (EMPRISE) and uses automatic input generation from the semantic segmentation, obtained via deep learning, of an optical image of the scene. For this last, promising avenue, we study the description of the input and its impact on the final simulation result. Finally, we propose ways of enriching the images generated by the physical simulator using deep learning, in particular through diffusion networks and text-to-image approaches
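As a toy illustration of the first contribution, one can compare low-order statistics of a simulated SAR patch against a real one. The specific statistics below (a Kolmogorov-Smirnov distance on amplitudes and the equivalent number of looks) and the Rayleigh placeholder data are illustrative assumptions, not the realism metrics defined in the thesis.

```python
# Toy realism check: compare amplitude statistics of real vs simulated patches.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
real_patch = rng.rayleigh(scale=1.0, size=(256, 256))  # placeholder "real" amplitudes
sim_patch = rng.rayleigh(scale=1.2, size=(256, 256))   # placeholder "simulated" amplitudes

stat, pvalue = ks_2samp(real_patch.ravel(), sim_patch.ravel())

def enl(a):
    """Equivalent number of looks: a standard SAR speckle statistic."""
    return (a.mean() ** 2) / a.var()

print(f"KS distance between amplitude distributions: {stat:.3f}")
print(f"ENL real: {enl(real_patch):.2f}  ENL simulated: {enl(sim_patch):.2f}")
```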
Fourneaud, Ludovic. "Caractérisation et modélisation des performances hautes fréquences des réseaux d'interconnexions de circuits avancés 3D : application à la réalisation d'imageurs de nouvelle génération." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00819827.
Full textBoumzaid, Yacine. "Etude et construction de schémas de subdivision quasi-linéaires sur des maillages bi-réguliers." Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00905806.
Full text