Academic literature on the topic "Translation d'image"
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic "Translation d'image."
Next to every source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Translation d'image"
Amrani, M. El, and A. Safouane. "Restauration d'images ultrasonores en contrôle non destructif." Canadian Journal of Physics 82, no. 8 (August 1, 2004): 661–69. http://dx.doi.org/10.1139/p04-013.
Dissertations / Theses on the topic "Translation d'image"
Mayet, Tsiry. "Multi-domain translation in a semi-supervised setting." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR46.
This thesis explores multi-modal generation and semi-supervised learning, addressing two critical challenges: supporting flexible configurations of input and output across multiple domains, and developing efficient training strategies for semi-supervised data settings. As artificial intelligence systems advance, there is a growing need for models that can flexibly integrate and generate multiple modalities, mirroring human cognitive abilities. Conventional deep learning systems often struggle when deviating from their training configuration, which occurs when certain modalities are unavailable in real-world applications. For instance, in medical settings, patients might not undergo every scan expected by a comprehensive analysis system. Additionally, obtaining finer control over generated modalities is crucial for enhancing generation capabilities and providing richer contextual information. As the number of domains increases, obtaining simultaneous supervision across all domains becomes increasingly challenging. We focus on multi-domain translation in a semi-supervised setting, extending the classical domain translation paradigm. Rather than addressing specific translation directions or limiting translations to domain pairs, we develop methods facilitating translations between any possible domain configuration, determined at test time. The semi-supervised aspect reflects real-world scenarios where complete data annotation is often infeasible or prohibitively expensive. Our work explores three main areas: (1) studying latent space regularization functions to enhance domain translation learning with limited supervision, (2) examining the scalability and flexibility of diffusion-based translation models, and (3) improving the generation speed of diffusion-based inpainting models. First, we propose LSM, a semi-supervised translation framework leveraging additional input and structured output data to regularize inter-domain and intra-domain dependencies. Second, we develop MDD, a novel diffusion-based semi-supervised framework for multi-domain translation. MDD shifts the classical reconstruction loss of diffusion models to a translation loss by modeling different noise levels per domain. The model leverages less noisy domains to reconstruct noisier ones, modeling missing data from the semi-supervised setting as pure noise and enabling flexible configuration of condition and target domains. Finally, we introduce TD-Paint, a novel diffusion-based inpainting model that improves generation speed and reduces computational burden. Through investigation of the generation sampling process, we observe that diffusion-based inpainting models suffer from unsynchronized generation and conditioning. Existing models often rely on resampling steps or additional regularization losses to realign condition and generation, increasing time and computational complexity. TD-Paint addresses this by modeling variable noise levels at the pixel level, enabling efficient use of the condition from the onset of generation.
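The per-domain noise mechanism the abstract attributes to MDD can be illustrated with a short sketch. The following is a minimal, hypothetical rendering of that idea rather than the thesis code: the `denoiser` interface, tensor shapes, noise schedule, and loss are all assumptions.

```python
import torch
import torch.nn.functional as F

def mdd_step(denoiser, domains, alphas_cumprod, T=1000):
    """One training step: each domain is diffused to its own timestep, so
    less-noisy domains condition the reconstruction of noisier ones.

    domains: dict name -> clean tensor (B, C, H, W), or None if unobserved.
    alphas_cumprod: 1-D tensor of length T (standard DDPM-style schedule).
    """
    B = next(x.shape[0] for x in domains.values() if x is not None)
    noisy, eps, ts = {}, {}, {}
    for name, x0 in domains.items():
        if x0 is None:
            # Semi-supervised gap: a missing domain enters as pure noise at
            # the maximum timestep (the shape here is an assumption).
            ts[name] = torch.full((B,), T - 1, dtype=torch.long)
            noisy[name] = torch.randn(B, 3, 64, 64)
            eps[name] = None  # no reconstruction target for missing data
        else:
            # Observed domains get independent per-domain timesteps: a low
            # timestep makes a domain a near-clean condition, a high one a target.
            t = torch.randint(0, T, (B,))
            a = alphas_cumprod[t].view(B, 1, 1, 1)
            noise = torch.randn_like(x0)
            noisy[name] = a.sqrt() * x0 + (1 - a).sqrt() * noise
            eps[name], ts[name] = noise, t
    # A single denoiser sees all domains jointly and predicts each one's noise.
    pred = denoiser(noisy, ts)  # assumed to return dict name -> predicted noise
    losses = [F.mse_loss(pred[n], e) for n, e in eps.items() if e is not None]
    return torch.stack(losses).mean()
```

Under this scheme, fixing a condition domain's timestep at zero at test time recovers ordinary conditional translation, which is how such a formulation can support arbitrary condition/target configurations chosen after training.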
Wang, Yaxing. "Transferring and learning representations for image generation and translation." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/669579.
Image generation is arguably one of the most attractive, compelling, and challenging tasks in computer vision. Among the methods which perform image generation, generative adversarial networks (GANs) play a key role. The most common image generation models based on GANs can be divided into two main approaches. The first one, called simply image generation, takes random noise as an input and synthesizes an image which follows the same distribution as the images in the training set. The second class, called image-to-image translation, aims to map an image from a source domain to one that is indistinguishable from those in the target domain. Image-to-image translation methods can further be divided into paired and unpaired approaches based on whether they require paired data or not. In this thesis, we aim to address some challenges of both image generation and image-to-image translation. GANs rely heavily upon having access to vast quantities of data, and fail to generate realistic images from random noise when applied to domains with few images. To address this problem, we aim to transfer knowledge from a model trained on a large dataset (the source domain) to one learned on limited data (the target domain). We find that both GANs and conditional GANs can benefit from models trained on large datasets. Our experiments show that transferring the discriminator is more important than transferring the generator; using both results in the best performance. We found, however, that this method suffers from overfitting, since we update all parameters to adapt to the target data. We therefore propose a novel architecture, tailored to knowledge transfer to very small target domains, which effectively explores which part of the latent space is most related to the target domain. Additionally, the proposed method is able to transfer knowledge from multiple pretrained GANs. Although image-to-image translation has achieved outstanding performance, it still faces several problems. First, for translation between complex domains (such as translations between different modalities), image-to-image translation methods require paired data. We show that when only some of the pairwise translations have been seen during training, we can infer the remaining unseen translations, for which no training pairs are available. We propose a new approach in which we align multiple encoders and decoders in such a way that the desired translation can be obtained by simply cascading the source encoder and the target decoder, even when they have not interacted during the training stage. Second, we address the issue of bias in image-to-image translation. Biased datasets unavoidably contain undesired changes, which are due to the fact that the target dataset has a particular underlying visual distribution. We use carefully designed semantic constraints to reduce the effects of this bias; the semantic constraint enforces the preservation of desired image properties. Finally, current approaches fail to generate diverse outputs or perform scalable image translation within a single model. To alleviate this problem, we propose a scalable and diverse image-to-image translation method: we employ random noise to control diversity, and scalability is obtained by conditioning on the domain label.
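The cascading idea in the abstract (aligning domain-specific encoders and decoders through a shared latent space so that unseen pairs can be composed at test time) can be sketched as follows. This is a minimal, hypothetical illustration rather than the thesis architecture; the layers, latent size, and domain names are assumptions.

```python
import torch
import torch.nn as nn

LATENT_CH = 256  # assumed shared latent width

class Encoder(nn.Module):
    """Maps a domain image into the shared latent space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, LATENT_CH, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Maps a shared latent code back to a domain image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_CH, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())
    def forward(self, z):
        return self.net(z)

# One encoder/decoder pair per domain, all trained against the same latent space.
domains = ["photo", "sketch", "segmentation"]  # illustrative domain names
enc = {d: Encoder() for d in domains}
dec = {d: Decoder() for d in domains}

def translate(x, src, tgt):
    """Cascade the source encoder with the target decoder, even for a pair
    that never interacted during training (the 'unseen translation' case)."""
    return dec[tgt](enc[src](x))

x = torch.randn(1, 3, 64, 64)              # dummy source-domain image
y = translate(x, "photo", "segmentation")  # pair unseen at training time
```

During training, only some encoder/decoder pairings would be supervised with paired data; the alignment of all codes in one latent space is what lets an untrained pairing still produce a coherent translation.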
Hingot, Vincent. "Development of ultrasound localization microscopy to measure cerebral perfusion during stroke : a study in mouse models prior to its translation in humans." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS562.
Ultrasonography is a medical imaging technique that uses ultrasound. A typical examination is based on two main modes: B-mode for anatomical imaging and Doppler mode for blood flow imaging. In the context of cerebrovascular diseases, ultrasonography is used primarily to estimate alterations in blood flow in the major cerebral arteries through transcranial Doppler. However, the low quality of images acquired through the skull does not allow ultrasound to be as effective as magnetic resonance imaging. Recent advances in ultrasound have led to the emergence of new imaging modes, particularly a super-resolution ultrasound technique that increases the resolution and contrast of vascular imaging. It is based on the rapid imaging of microbubbles commonly used as contrast agents for ultrasound. This method has been shown to image even the smallest vessels and allows cerebral perfusion imaging to be performed more effectively than with transcranial Doppler, which would allow earlier and more effective management of stroke patients. Before being used in a clinical context, this ultrasound super-resolution technique must be better understood, better implemented, and adapted to the particular context of cerebrovascular diseases. In particular, this manuscript will discuss how best to form the images and will examine the actual performance of super-resolved imaging. We will also discuss the possibilities of correcting artefacts due to physiological motion and of using super-resolved imaging in various organs, particularly the kidneys, tumors, and the spinal cord. Finally, imaging of models of cerebral ischemia in rodents will enable the construction of vascular biomarkers suitable for the diagnosis of cerebrovascular pathologies and should aid translation to human patients.
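The localization-and-accumulation principle behind ultrasound localization microscopy, as described in this abstract, can be sketched in a few lines. This is a minimal, hypothetical illustration and not the thesis pipeline: the intensity threshold, the 10x grid upsampling, and the centroid-based localizer are assumptions, and practical pipelines additionally track bubbles across frames and correct for physiological motion.

```python
import numpy as np
from scipy.ndimage import center_of_mass, label

def ulm_density_map(frames, threshold=0.5, upsample=10):
    """Build a super-resolved vascular density map from contrast frames.

    frames: (N, H, W) array of contrast-enhanced frames, values in [0, 1].
    Returns an (H*upsample, W*upsample) count map of bubble localizations.
    """
    N, H, W = frames.shape
    density = np.zeros((H * upsample, W * upsample))
    for frame in frames:
        # Isolate bright, isolated spots assumed to be individual microbubbles.
        mask = frame > threshold
        labels, n = label(mask)
        if n == 0:
            continue
        # Intensity-weighted centroids give sub-pixel bubble positions.
        centers = center_of_mass(frame, labels, index=list(range(1, n + 1)))
        for r, c in centers:
            # Accumulate each localization on the finer grid.
            density[int(r * upsample), int(c * upsample)] += 1
    return density
```

Because each bubble is localized far more precisely than the diffraction-limited pixel size, accumulating many frames progressively reveals vessels well below the native resolution, which is the source of the technique's gain over conventional Doppler imaging.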