Theses on the topic "Super learning"

Consult the top 50 dissertations (bachelor's, master's, and doctoral theses) for research on the topic "Super learning".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when one is included in the metadata.

Browse theses from many scientific fields and compile an accurate bibliography.

1

Lindberg, Magnus. "An Imitation-Learning based Agent playing Super Mario". Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Context. Developing an Artificial Intelligence (AI) agent that can predict and act in all possible situations in the dynamic environments that modern video games often consist of is nearly impossible to do up front, and would cost a great deal of money and time to create by hand. Creating a learning AI agent that can study its environment with the help of Reinforcement Learning (RL) simplifies this task. Another frequently required feature is an AI agent with natural behaviour, and one attempt to solve that problem is to imitate a human using Imitation Learning (IL). Objectives. The purpose of this investigation is to study whether it is possible to create a learning AI agent able to play and complete some levels of a platform game by combining the two learning techniques RL and IL. Methods. To investigate the research question, an implementation is built that combines one RL technique and one IL technique. A set of human players play the game, their behaviour is saved and applied to the agents, and RL is then used to train and tune the agents' playing performance. A couple of experiments are executed to evaluate the differences between the trained agents and their respective human teachers. Results. The experiments showed promising indications that the agents, during different phases of the experiments, behaved similarly to their human trainers. The agents also performed well when compared to existing agents. Conclusions. In conclusion, there are promising results for creating dynamic agents with natural behaviour by combining RL and IL, and additional adjustments could make the approach perform even better as a learning AI with more natural behaviour.
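The abstract does not name the specific RL technique combined with imitation learning, so the following is orientation only: a common choice for platform-game agents is a temporal-difference rule such as Q-learning, with the recorded human play used to initialize or bias the policy. In standard notation (assumed, not taken from the thesis):

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
```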
2

Kumar, Sanjeev. "Priors and learning based methods for super-resolution". Diss., [La Jolla] : University of California, San Diego, 2010. http://wwwlib.umi.com/cr/ucsd/fullcit?p3397852.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2010.
Title from first page of PDF file (viewed April 14, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 96-102).
3

Pickup, Lyndsey C. "Machine learning in multi-frame image super-resolution". Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:88c6968f-1e62-4d89-bd70-604bf1f41007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Multi-frame image super-resolution is a procedure which takes several noisy low-resolution images of the same scene, acquired under different conditions, and processes them together to synthesize one or more high-quality super-resolution images, with higher spatial frequency, and less noise and image blur than any of the original images. The inputs can take the form of medical images, surveillance footage, digital video, satellite terrain imagery, or images from many other sources. This thesis focuses on Bayesian methods for multi-frame super-resolution, which use a prior distribution over the super-resolution image. The goal is to produce outputs which are as accurate as possible, and this is achieved through three novel super-resolution schemes presented in this thesis. Previous approaches obtained the super-resolution estimate by first computing and fixing the imaging parameters (such as image registration), and then computing the super-resolution image with this registration. In the first of the approaches taken here, superior results are obtained by optimizing over both the registrations and image pixels, creating a complete simultaneous algorithm. Additionally, parameters for the prior distribution are learnt automatically from data, rather than being set by trial and error. In the second approach, uncertainty in the values of the imaging parameters is dealt with by marginalization. In a previous Bayesian image super-resolution approach, the marginalization was over the super-resolution image, necessitating the use of an unfavorable image prior. By integrating over the imaging parameters rather than the image, the novel method presented here allows for more realistic prior distributions, and also reduces the dimension of the integral considerably, removing the main computational bottleneck of the other algorithm. Finally, a domain-specific image prior, based upon patches sampled from other images, is presented. For certain types of super-resolution problems where it is applicable, this sample-based prior gives a significant improvement in the super-resolution image quality.
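For context, the Bayesian formulation these methods share can be written generically as follows (a standard model with notation assumed here, not quoted from the thesis): each low-resolution frame y_k is a warped, blurred, decimated, noisy view of the super-resolution image x, and the second approach above integrates the registration parameters out rather than fixing them:

```latex
\mathbf{y}_k = W_k(\boldsymbol{\theta}_k)\, \mathbf{x} + \boldsymbol{\epsilon}_k, \qquad
p(\mathbf{x} \mid \{\mathbf{y}_k\}) \propto p(\mathbf{x}) \int \prod_k p(\mathbf{y}_k \mid \mathbf{x}, \boldsymbol{\theta}_k)\, p(\{\boldsymbol{\theta}_k\})\, \mathrm{d}\{\boldsymbol{\theta}_k\}
```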
4

Ouyang, Wei. "Deep Learning for Advanced Microscopy". Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC174/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: Microscopy has played an important role in biology for several centuries, but its resolution has long been limited to ~250 nm by diffraction, leaving many important biological structures (e.g. viruses, vesicles, nuclear pores, synapses) unresolved. Over the last decade, several super-resolution methods have been developed that break this limit. Among the most powerful and popular super-resolution techniques are those based on single-molecule localization (single molecule localization microscopy, or SMLM), such as PALM and STORM. By precisely localizing the positions of isolated fluorescent molecules in thousands or more sequentially acquired diffraction-limited images, SMLM can achieve resolutions of 20-50 nm or better. However, SMLM is inherently slow, owing to the need to accumulate enough localizations to sample the fluorescent structures at high resolution. The drawback in acquisition speed (typically ~30 minutes per super-resolution image) makes it difficult to use SMLM in high-throughput and live-cell imaging. Many methods have been proposed to address this issue, mostly by improving the localization algorithms to handle overlapping spots, but most of them compromise spatial resolution and cause artifacts. Methods and results: In this work, we applied a deep-learning-based image-to-image translation framework to improve imaging speed and quality by restoring information from rapidly acquired, low-quality SMLM images. Building on recent advances in deep learning, including the U-net and Generative Adversarial Networks, we developed Artificial Neural Network Accelerated PALM (ANNA-PALM), which learns structural information from training images and uses the trained model to accelerate SMLM imaging by a factor of tens to hundreds. With experimentally acquired images of different cellular structures (microtubules, nuclear pores and mitochondria), we demonstrated that deep learning can efficiently capture structural information from fewer than 10 training samples and reconstruct high-quality super-resolution images from sparse, noisy SMLM images obtained with much shorter acquisitions than usual for SMLM. We also showed that ANNA-PALM is robust to variations between training and testing conditions, due either to changes in the biological structure or to changes in imaging parameters. Furthermore, we took advantage of the acceleration provided by ANNA-PALM to perform high-throughput experiments, acquiring ~1000 cells at high resolution in ~3 hours. Additionally, we designed a tool that estimates and reduces possible artifacts by measuring the consistency between the reconstructed image and the experimental wide-field image. Our method enables faster and gentler imaging, can be applied at high throughput, and provides a novel avenue towards live-cell high-resolution imaging. Deep learning methods rely on training data, and their performance can be improved even further with more training data. One inexpensive way to obtain more training data is through data sharing within the microscopy community. However, it is often difficult to exchange or share localization microscopy data, because localization tables alone are typically several gigabytes in size, and there has been no dedicated platform for localization microscopy data offering features such as rendering, visualization and filtering.
To address these issues, we developed a file format that losslessly compresses localization tables into smaller files, along with a web platform, ShareLoc (https://shareloc.xyz), that makes it easy to visualize and share 2D or 3D SMLM data. We believe this platform can greatly improve the performance of deep learning models, accelerate tool development, facilitate data re-analysis, and further promote reproducible research and open science.
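As a rough illustration of the image-to-image adversarial setup that ANNA-PALM builds on (a U-net-style generator trained with adversarial plus reconstruction losses), here is a minimal PyTorch sketch. The tiny networks, tensor shapes, loss weight, and variable names are placeholder assumptions of mine, not the thesis's architecture:

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the U-net generator and the discriminator;
# the real ANNA-PALM networks are far deeper.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

sparse = torch.rand(4, 1, 64, 64)  # short-acquisition (sparse) SMLM render
dense = torch.rand(4, 1, 64, 64)   # long-acquisition (dense) target

# Discriminator step: real (sparse, dense) pairs vs. generated pairs.
fake = G(sparse).detach()
real_logits = D(torch.cat([sparse, dense], dim=1))
fake_logits = D(torch.cat([sparse, fake], dim=1))
loss_d = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D while staying close to the dense target.
fake = G(sparse)
fake_logits = D(torch.cat([sparse, fake], dim=1))
loss_g = bce(fake_logits, torch.ones_like(fake_logits)) + 100.0 * l1(fake, dense)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In the real system the generator is a full U-net and the inputs are rendered localization images rather than random tensors.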
5

Yelibi, Lionel. "Introduction to fast Super-Paramagnetic Clustering". Master's thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/31332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We map stock market interactions to spin models to recover their hierarchical structure, using a simulated-annealing-based Super-Paramagnetic Clustering (SPC) algorithm. This is directly compared to a modified implementation of a maximum-likelihood approach to fast Super-Paramagnetic Clustering (f-SPC). The methods are first applied to standard toy test-case problems, and then to a dataset of 447 stocks traded on the New York Stock Exchange (NYSE) over 1249 days. The signal-to-noise ratio of stock market correlation matrices is briefly considered. Our results approximately recover clusters representative of standard economic sectors, together with mixed clusters whose dynamics shed light on the adaptive nature of financial markets and raise concerns about the effectiveness of static, industry-based financial market classification in the world of real-time data analytics. A key result is the confirmation that standard maximum-likelihood methods converge to solutions within a Super-Paramagnetic (SP) phase. We use insights arising from this to discuss the implications of using a Maximum Entropy Principle (MEP), as opposed to the Maximum Likelihood Principle (MLP), as an optimization device for this class of problems.
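For orientation, super-paramagnetic clustering in the Blatt-Wiseman-Domany tradition assigns each asset a Potts spin s_i and studies a Hamiltonian of roughly the following form (standard notation, assumed rather than quoted from the thesis):

```latex
H[s] = -\sum_{\langle i,j \rangle} J_{ij}\, \delta_{s_i, s_j},
```

where the coupling J_{ij} >= 0 grows with the correlation between assets i and j; clusters are then read off from the spin-spin correlations G_{ij} = <delta_{s_i, s_j}> sampled in the super-paramagnetic phase.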
6

Bégin, Isabelle. "Camera-independent learning and image quality assessment for super-resolution". Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102957.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An increasing number of applications require high-resolution images in situations where the access to the sensor and the knowledge of its specifications are limited. In this thesis, the problem of blind super-resolution is addressed, here defined as the estimation of a high-resolution image from one or more low-resolution inputs, under the condition that the degradation model parameters are unknown. The assessment of super-resolved results, using objective measures of image quality, is also addressed.
Learning-based methods have been successfully applied to the single-frame super-resolution problem in the past, but sensor characteristics such as the Point Spread Function (PSF) must often be known. In this thesis, a learning-based approach is adapted to work without knowledge of the PSF, thus making the framework camera-independent. However, the goal is not only to super-resolve an image under this limitation, but also to provide an estimate of the best PSF, consisting of a theoretical model with one unknown parameter.
In particular, two extensions of a method performing belief propagation on a Markov Random Field are presented. The first method finds the best PSF parameter by searching for the minimum mean distance between training examples and patches from the input image. In the second method, the best PSF parameter and the super-resolution result are found simultaneously by providing a range of possible PSF parameters from which the super-resolution algorithm chooses. For both methods, a first estimate is obtained through blind deconvolution and an uncertainty is calculated in order to restrict the search.
Both camera-independent adaptations are compared and analyzed in various experiments, and a set of key parameters are varied to determine their effect on both the super-resolution and the PSF parameter recovery results. The use of quality measures is thus essential to quantify the improvements obtained from the algorithms. A set of measures is chosen that represents different aspects of image quality: the signal fidelity, the perceptual quality and the localization and scale of the edges.
Results indicate that both methods improve similarity to the ground truth and can in general refine the initial PSF parameter estimate towards the true value. Furthermore, the similarity measure results show that the chosen learning-based framework consistently improves a measure designed for perceptual quality.
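The "theoretical model with one unknown parameter" mentioned above is not spelled out in the abstract; in many camera-independent frameworks it is an isotropic Gaussian PSF, in which case the single parameter is the width sigma (an assumption for illustration):

```latex
h_\sigma(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left( -\frac{x^2 + y^2}{2\sigma^2} \right)
```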
7

Jain, Vinit. "Deep Learning based Video Super-Resolution in Computer Generated Graphics". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Super-resolution is a widely studied problem in the field of computer vision, where the purpose is to increase the resolution of, or super-resolve, image data. In video super-resolution, maintaining temporal coherence for consecutive video frames requires fusing information from multiple frames to super-resolve each frame. Current deep learning methods perform video super-resolution, yet most of them focus on natural datasets. In this thesis, we use a recurrent back-projection network on a dataset of computer-generated graphics, with example applications including upsampling low-resolution cinematics for the gaming industry. The dataset comes from a variety of gaming content, rendered at (3840 x 2160) resolution. The objective of the network is to produce the upscaled version of the low-resolution frame by learning from an input combination of the low-resolution frame, a sequence of neighboring frames, and the optical flow between each neighboring frame and the reference frame. Under the baseline setup, we train the model to perform 2x upsampling from (1920 x 1080) to (3840 x 2160) resolution. Compared against bicubic interpolation, our model achieved better results by a margin of 2 dB for Peak Signal-to-Noise Ratio (PSNR), 0.015 for Structural Similarity Index Measure (SSIM), and 9.3 for the Video Multi-method Assessment Fusion (VMAF) metric. In addition, we demonstrate the susceptibility of neural-network performance to changes in image compression quality, and the inability of distortion metrics to capture perceptual details accurately.
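For reference, the headline PSNR metric is defined in the standard way (textbook definition, not specific to this thesis), for images with peak value MAX:

```latex
\mathrm{PSNR} = 10 \log_{10} \frac{MAX^2}{\mathrm{MSE}}, \qquad
\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( x_i - \hat{x}_i \right)^2,
```

so the reported 2 dB gain corresponds to roughly a 37% reduction in mean squared error (a factor of 10^{-0.2}, about 0.63).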
8

Donnot, Benjamin. "Deep learning methods for predicting flows in power grids : novel architectures and algorithms". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS060/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis addresses security problems in the French grid operated by RTE, the French Transmission System Operator (TSO). Progress in sustainable energy, electricity market efficiency, and novel consumption patterns pushes TSOs to operate the grid closer to its security limits. To this end, it is essential to make the grid "smarter". To tackle this issue, this work explores the benefits of artificial neural networks. We propose novel deep learning algorithms and architectures, which we call "guided dropout", to assist the decisions of human operators (TSO dispatchers). This allows predicting the power flows that follow a deliberate or accidental modification of the grid. The problem is tackled by separating the different inputs: continuous data (productions and consumptions) are introduced in a standard way, via a neural network input layer, while discrete data (grid topologies) are encoded directly in the neural network architecture. This architecture is dynamically modified based on the power grid topology by switching the activation of hidden units on or off. The main advantage of this technique lies in its ability to predict the flows even for previously unseen grid topologies. Guided dropout achieves high accuracy (up to 99% precision for flow predictions) with a 300-fold speedup compared to physical grid simulators based on Kirchhoff's laws, even for unseen contingencies and without detailed knowledge of the grid structure. We also showed that guided dropout can be used to rank contingencies that might occur in order of severity. In this application, we demonstrated that our algorithm attains the same risk as currently implemented policies while requiring only 2% of today's computational budget. The ranking remains relevant even for grid cases never seen before, and can be used to obtain an overall estimate of the global security of the power grid.
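A minimal sketch of how the topology-conditioned gating described above could look in code; this is my reading of the abstract for illustration (layer sizes, the soft mask, and the exact gating rule are assumptions, not RTE's implementation):

```python
import torch
import torch.nn as nn

class GuidedDropoutNet(nn.Module):
    """Continuous inputs pass through dense layers; the binary topology
    vector switches hidden units on or off instead of entering as features."""
    def __init__(self, n_in, n_hidden, n_out, n_topo):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        # one learned soft mask per topology element (e.g. per line outage)
        self.mask = nn.Parameter(torch.randn(n_topo, n_hidden))
        self.fc2 = nn.Linear(n_hidden, n_out)

    def forward(self, x, topo):
        # x: (batch, n_in) injections; topo: (batch, n_topo) in {0, 1}
        h = torch.relu(self.fc1(x))
        gate = torch.sigmoid(self.mask)                    # (n_topo, n_hidden)
        # an element that is "out" (topo = 1) attenuates its units; others pass
        g = torch.prod(1 - topo.unsqueeze(-1) * (1 - gate), dim=1)
        return self.fc2(h * g)

net = GuidedDropoutNet(n_in=50, n_hidden=128, n_out=30, n_topo=10)
flows = net(torch.rand(4, 50), torch.bernoulli(torch.full((4, 10), 0.2)))
print(flows.shape)  # torch.Size([4, 30])
```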
9

Kim, Max. "Improving Knee Cartilage Segmentation using Deep Learning-based Super-Resolution Methods". Thesis, KTH, Medicinteknik och hälsosystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Segmentation of the knee cartilage is an important step in surgery planning and in manufacturing patient-specific prostheses. A promising technology in recent years is deep learning-based super-resolution, composed of feed-forward models that have been applied successfully to natural and medical images. This thesis tests the feasibility of super-resolving thick-slice 2D sequence acquisitions while achieving sufficient segmentation accuracy for the articular cartilage of the knee. The investigated approaches are single- and multi-contrast super-resolution, where the contrasts are based on the 2D sequence, the 3D sequence, or both. The deep learning models investigated are based on predicting the residual image between the high- and low-resolution image pairs, finding the hidden latent features connecting the image pairs, and approximating the end-to-end non-linear mapping between the low- and high-resolution image pairs. The results showed a slight improvement in segmentation accuracy relative to the baseline bilinear interpolation for single-contrast super-resolution; however, no notable improvement in segmentation accuracy was observed for the multi-contrast case. Although the multi-contrast approach did not yield notable improvements, there remain promising, unexplored directions not covered in this work that could be addressed in future work.
10

Ceccarelli, Mattia. "Optimization and applications of deep learning algorithms for super-resolution in MRI". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21694/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The increasing amount of data produced by modern infrastructures requires ever more precise, fast, and efficient analysis tools. For these reasons, over the last decades Machine Learning (ML) and Deep Learning (DL) techniques have seen exponential growth in publications and research from the scientific community. This work proposes two new frameworks for Deep Learning: Byron, written in C++ for fast analysis in a parallelized CPU environment, and NumPyNet, written in Python, which provides a clear and understandable deep-learning interface tailored around readability. Byron is tested on single-image super-resolution for Nuclear Magnetic Resonance (NMR) imaging of brains, using pre-trained models for x2 and x4 upscaling that exhibit greater performance than the most common non-learning-based algorithms. The work shows that the reconstruction ability of DL models surpasses bicubic interpolation even on images totally different from the dataset on which they were trained, indicating that the generalization ability of these deep learning models can be sufficient to perform well even on biomedical data, which contain particular shapes and textures. Further experiments examine how the same algorithms perform under different input conditions, showing a large variance between results.
11

Bevilacqua, Marco. "Algorithms for super-resolution of images and videos based on learning methods". PhD thesis, Université Rennes 1, 2014. http://tel.archives-ouvertes.fr/tel-01064396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Super-resolution (SR) refers to a class of techniques that enhance the spatial resolution of images and videos. SR algorithms can be of two kinds: multi-frame methods, where multiple low-resolution images are aggregated to form a single high-resolution image, and single-image methods, which aim to upscale a single image. This thesis focuses on developing theory and algorithms for the single-image SR problem. In particular, we adopt the so-called example-based approach, where the output image is estimated with machine learning techniques, using the information contained in a dictionary of image "examples". The examples consist of image patches, either extracted from external images or derived from the input image itself. For both kinds of dictionary, we design novel SR algorithms, with new upscaling and dictionary construction procedures, and compare them to state-of-the-art methods. The results achieved are shown to be very competitive in terms of both visual quality of the super-resolved images and computational complexity. We then apply the designed algorithms to video upscaling, where the goal is to enlarge the resolution of an entire video sequence. The algorithms, suitably adapted to this case, are also analyzed in the coding context. The analysis shows that, in specific cases, SR can also be an effective tool for video compression, thus opening interesting new perspectives.
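As a toy illustration of the example-based idea (a dictionary of paired low/high-resolution patches queried by nearest neighbour), here is a short sketch; the patch sizes, the 1-NN rule, and the random "dictionary" are illustrative assumptions, not the thesis's algorithms:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# External dictionary: paired low-res (5x5) and high-res (10x10) patches,
# stored as flat vectors. In practice these come from training images.
lr_dict = rng.random((1000, 25))
hr_dict = rng.random((1000, 100))
nn = NearestNeighbors(n_neighbors=1).fit(lr_dict)

def upscale_patch(lr_patch):
    """Replace an input LR patch by the HR counterpart of its nearest example."""
    idx = nn.kneighbors(lr_patch.reshape(1, -1), return_distance=False)[0, 0]
    return hr_dict[idx].reshape(10, 10)

hr = upscale_patch(rng.random((5, 5)))
print(hr.shape)  # (10, 10)
```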
12

Firoiu, Vlad. "Beating the world's best at Super Smash Bros. with deep reinforcement learning". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 29).
There has been a recent explosion in the capabilities of game-playing artificial intelligence. Many classes of RL tasks, from Atari games to motor control to board games, are now solvable by fairly generic algorithms, based on deep learning, that learn to play from experience with often minimal knowledge of the specific domain of interest. In this work, we investigate the performance of these methods on Super Smash Bros. Melee (SSBM), a popular multiplayer fighting game. The SSBM environment has complex dynamics and partial observability, making it challenging for man and machine alike. The multiplayer aspect poses an additional challenge, as the vast majority of recent advances in RL have focused on single-agent environments. Nonetheless, we show that it is possible to train agents that are competitive against and even surpass human professionals, a new result for the video game setting.
by Vlad Firoiu.
S.M.
13

Vassilo, Kyle. "Single Image Super Resolution with Infrared Imagery and Multi-Step Reinforcement Learning". University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1606146042238906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Trinh, Dinh Hoan. "Denoising and super-resolution for medical images by example-based learning approach". Paris 13, 2013. http://scbd-sto.univ-paris13.fr/secure/edgalilee_th_2013_trinh.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The objective of this thesis is to develop effective methods for denoising and super-resolution, in order to improve the quality and spatial resolution of medical images. In particular, we are motivated by the challenge of integrating the denoising and super-resolution problems into a single formulation. Our methods use standard images, or example images close to the image under consideration, for denoising and/or super-resolution. For the denoising problem, we introduce three new methods to reduce certain types of noise commonly found in medical images. The first method is built on kernel ridge regression; it can be applied to Gaussian noise and Rician noise. In the second method, denoising is performed by a regression model built on the K nearest neighbours; it can be used to reduce Gaussian noise and Poisson noise. In the third method, we propose a sparse-representation model to remove Gaussian noise from low-dose CT images. The proposed denoising methods are competitive with existing approaches. For super-resolution, we propose two new example-based single-image methods. The first is a geometric method based on projection onto the convex hull. In the second, super-resolution is performed via a sparse-representation model. The experimental results show that the proposed methods are very effective for medical images, which are often affected by noise.
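As background for the first denoising method, kernel ridge regression fits a function in a reproducing-kernel Hilbert space with a squared-norm penalty; in the standard formulation (notation mine, not the thesis's), the coefficients have a closed form:

```latex
\hat{f} = \arg\min_{f \in \mathcal{H}} \sum_{i=1}^{n} \big( y_i - f(x_i) \big)^2 + \lambda \| f \|_{\mathcal{H}}^2,
\qquad \hat{\boldsymbol{\alpha}} = (K + \lambda I)^{-1} \mathbf{y},
```

where K_{ij} = k(x_i, x_j) and the prediction at a new point x is the weighted sum of kernel evaluations, sum_i alpha_i k(x_i, x).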
15

Ribeiro, Eduardo Ferreira. "Exploring Transfer Learning via Convolutional Neural Networks for Image Classification and Super-Resolution". Universidade de Salzburg, 2018. http://hdl.handle.net/11612/1009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work presents my research on the use of Convolutional Neural Networks (CNNs) for transfer learning, through its application to colonic polyp classification and iris super-resolution. Traditionally, machine learning methods use the same feature space and the same distribution for training and testing. Several problems can emerge in this approach, for example when the number of samples for training (especially in supervised training) is limited. In the medical field this problem is recurrent, mainly because obtaining a database that is large enough and appropriately annotated for training is highly costly and may become impractical. Another problem relates to the distribution of textural features in an image database, which may be too broad, as with the texture patterns of the human iris; in this case a single, specific training database might not generalize well enough to be applied to the entire domain. In this work we explore the use of texture transfer learning to overcome these problems in two applications: colonic polyp classification and iris super-resolution. The leading cause of deaths related to the intestinal tract is the development of cancerous cells (polyps) in its many parts. Early detection (when the cancer is still at an early stage) can reduce the risk of mortality among these patients. More specifically, colonic polyps (benign tumors or growths that arise on the inner colon surface) have a high occurrence and are known precursors of colon cancer development. Several studies have shown that automatic detection and classification of image regions that may contain polyps within the colon can be used to assist specialists and decrease the polyp miss rate. However, classification can be a difficult task due to several factors, such as lack or excess of illumination, blurring due to movement or water injection, and the varied appearance of polyps. Moreover, finding a robust, global feature extractor that summarizes and represents all these pit-pattern structures in a single vector is very difficult, and Deep Learning can be a good alternative for overcoming these problems. One goal of this work is to show the effectiveness of CNNs trained from scratch for colonic polyp classification, along with the capability of transferring knowledge between natural and medical images using off-the-shelf pretrained CNNs. In this case, the CNN projects the target database samples into a vector space where the classes are more likely to be separable. The second part of this work is dedicated to transfer learning for iris super-resolution. The main goal of Super-Resolution (SR) is to produce, from one or more images, an image with higher resolution (more pixels) that is at the same time more detailed and realistic, while remaining faithful to the low-resolution image(s). Currently, most iris recognition systems require the user to present their iris to the sensor at a close distance; however, there is constant pressure to allow more relaxed acquisition conditions in such systems. In this work we show that deep learning and transfer learning for single-image super-resolution applied to iris recognition can be an alternative for recognizing low-resolution iris images.
For this purpose, we explore whether the nature of the images, as well as the pattern of the iris itself, influences CNN transfer learning and, consequently, the results of the recognition process.
16

Mathari, Bakthavatsalam Pagalavan. "Hardware Acceleration of a Neighborhood Dependent Component Feature Learning (NDCFL) Super-Resolution Algorithm". University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1366034621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Bordone Molini, Andrea. "Deep learning for inverse problems in remote sensing: super-resolution and SAR despeckling". Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2903492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Polini, Elena. "Super Resolution di immagini con reti neurali convoluzionali". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18502/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The main topic of this thesis is the super-resolution problem applied to imaging. The aim is to provide a general overview of the problem and of possible approaches, and to present solution models based on the use of artificial neural networks.
19

Sargent, Garrett Craig. "Single-Image Super-Resolution via Regularized Extreme Learning Regression for Imagery from Microgrid Polarimeters". University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1492782713231794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Sha, Feng. "Advanced Evolutionary Computation and Deep Learning for Real-Time Video Target Tracking and Image Processing". Thesis, The University of Sydney, 2020. https://hdl.handle.net/2123/21992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Over the past few decades, commercial markets for, and user requirements of, video and image processing have grown dramatically. Accurate real-time target detection and trajectory estimation, together with high-resolution image performance, are now essential in the computer vision domain. Meanwhile, evolutionary algorithms play an important role in the optimisation and estimation involved in video target tracking and image processing, and several challenges must be addressed to provide precise results in complex and changeable conditions. The aim of this research is to develop the potential of new and advanced evolutionary computation methodologies while delivering concise, accurate and real-time image processing outcomes. Further, a new deep-learning approach is proposed, with excellent results, to solve image super-resolution problems and the blurred-target challenge in object motion estimation. In the first part of the thesis, two particle swarm optimisation (PSO) based methods were designed and developed to achieve higher performance in tracking random object movement. One method optimises particle quality and diversity across iterations with a dynamic inertia weight; the other expands a particle's flexibility and search domain by inheriting useful information from the previous solution to perform dynamic particle movement. In the testing experiments, both approaches improved performance, speed and the ability to handle multiple movement patterns compared to traditional methods. The second part of the thesis focuses on multi-swarm algorithms: interactive information exchange among multiple swarms increases the computational capability of solution searching and improves result quality over a single swarm. Two multi-swarm approaches were developed and tested at this stage. The first is a novel multiple-particle-swarm approach with dynamic convergence for object tracking in complicated environments; it absorbs the advantages of other multi-swarm algorithms to optimise resources and iteration, and its multiple independent populations inherit their own attribute weights through dynamic range convergence while influencing one another's solutions. The second multi-swarm approach distributes multiple swarms over organised grids of the desired search space and dimensions, producing more accurate outcomes and wider search domains to overcome existing tracking challenges and interference. Compared to single-swarm methods, the multi-swarm structure was demonstrated in experiments to require fewer resources and iterations while improving target focusing and retrieval. In the third part, evolutionary algorithms are used to select a more representative target definition through invariant-feature optimisation. Unlike the first and second stages, which used swarm intelligence as a search strategy in video tracking, and instead of a colour-histogram representation, the proposed discrete dynamic swarm optimisation approach aims to provide a richer target description to overcome challenges such as illumination changes, background noise, deformation and occlusion. The approach also integrates a complementary procedure to eliminate geographically inappropriate feature points.
According to the experimental results, the target estimator handles various kinds of tracking challenges more flexibly and provides stable detection in complicated conditions. In the final part, to deliver more accurate tracking performance, especially under blur, a convolutional neural network is applied to problems associated with low-resolution images and videos, given its excellent performance and recent impact in the computer vision community. At this stage, a novel deep parallel residual network (DPRN) is proposed. It proves to be a fast and efficient algorithm for image super-resolution based on residual learning, with significant advantages in accuracy and speed thanks to its deeper convolutional structure, more accurate image reconstruction and real-time execution. The proposed approach organises layers into branches and increases the number of layers to 35 with parallel local residual learning. Further, it adopts the Adam optimiser, which helps the system achieve faster training and improved image quality. Experiments were conducted on standard benchmark datasets such as Set5, Set14, BSDS100 and Urban100, and compared with current state-of-the-art approaches. The results indicate that the proposed DPRN provides higher super-resolution quality with real-time execution (27.18 fps on average) than different state-of-the-art algorithms. Building on the DPRN super-resolution approach, small, blurry targets in low-resolution videos can be tracked more reliably than by tracking against various noise in higher-resolution environments.
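For reference, the canonical particle swarm update that the first-stage methods build on, including the dynamic inertia weight w mentioned above, is (standard textbook form, notation not quoted from the thesis):

```latex
v_i \leftarrow w\, v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (g - x_i), \qquad x_i \leftarrow x_i + v_i,
```

where p_i is particle i's best-known position, g the swarm's best, c_1 and c_2 acceleration constants, and r_1, r_2 uniform random draws in [0, 1].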
21

Rezio, Ana Carolina Correia 1986. "Super-resolução de imagens baseada em aprendizado utilizando descritores de características". [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Advisor: Hélio Pedrini
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Computação
Abstract: There is currently a growing demand for high-resolution images in several domains of knowledge, such as remote sensing, medicine, industrial automation and microscopy, among others. High-resolution images provide details that are important for the analysis and visualization of the data present in the images. However, due to the cost of high-precision sensors and the physical limits on reducing pixel size within the sensor itself, high-resolution images are often acquired through super-resolution methods. This work proposes a method for super-resolving an image, or a sequence of images, from the residual compensation learned from features extracted from the residual image and the training set. The results are compared with methods available in the literature; quantitative and qualitative measures are used to compare the results obtained with the super-resolution techniques considered in the experiments.
Master's
Computer Science
Master in Computer Science
22

Gawande, Saurabh. "Generative adversarial networks for single image super resolution in microscopy images". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230188.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Image super-resolution is a widely studied problem in computer vision, where the objective is to convert a low-resolution image into a high-resolution image. Conventional methods for achieving super-resolution, such as image priors, interpolation and sparse coding, require a lot of pre/post-processing and optimization. Recently, deep learning methods such as convolutional neural networks and generative adversarial networks have been used to perform super-resolution with results competitive with the state of the art, but none of them had been applied to microscopy images. In this thesis, a generative adversarial network, mSRGAN, is proposed for super-resolution with a perceptual loss function consisting of an adversarial loss, mean squared error and content loss. The objective of our implementation is to learn an end-to-end mapping between the low/high-resolution images and optimize the upscaled image for quantitative metrics as well as perceptual quality. We then compare our results with the current state-of-the-art methods in super-resolution, conduct a proof-of-concept segmentation study to show that super-resolved images can be used as an effective pre-processing step before segmentation, and validate the findings statistically.
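The perceptual loss described above combines three terms; written generically (the weights lambda are placeholders, since the abstract does not give them):

```latex
\mathcal{L}_{\mathrm{perceptual}} = \mathcal{L}_{\mathrm{MSE}} + \lambda_c\, \mathcal{L}_{\mathrm{content}} + \lambda_a\, \mathcal{L}_{\mathrm{adversarial}},
```

where the content term is typically a feature-space distance (e.g. between network activations of the super-resolved and ground-truth images) and the adversarial term scores how well the generator fools the discriminator.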
23

Sebastiani, Andrea. "Deep Plug-and-Play Gradient Method for Super-Resolution". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20619/.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Several fields require high-resolution images. By resolution we mean not only the spatial pixel dimensions of the image but also the quality of the image itself, free of distortion and/or noise. Images of this kind can usually be obtained with acquisition equipment fitted with high-precision sensors capable of digitizing analog data. Often, because of physical and economic limitations, the quality achieved by the instruments falls well short of what the various applications require. To solve this problem, numerous techniques commonly known as Super-Resolution have been developed. The purpose of these methods is to reconstruct a high-resolution (HR) image from an image acquired at a lower resolution (LR). This thesis has two main objectives. The first is to study and evaluate how classical techniques can be combined with recent innovations in deep learning applied to image reconstruction. The second is to extend a class of so-called plug-and-play methods by introducing a regularizer on the derivatives. For these reasons, we decided to call the algorithm resulting from this research the Deep Plug-and-Play Gradient Method. We note that the proposed method can be used in many problems whose mathematical formulation is similar to the super-resolution problem; in this thesis we chose to focus on and implement only a version for Super-Resolution.
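To make the plug-and-play idea concrete, here is a schematic iteration alternating a gradient step on the data-fidelity term with a denoiser acting as the regularization step; the operators and the identity denoiser are placeholders, not the thesis's exact algorithm.

```python
# Schematic plug-and-play iteration for super-resolution: a gradient step on
# the data term 0.5*||A x - y||^2 followed by a denoiser standing in for the
# regularization step. Operators and the identity denoiser are placeholders,
# not the thesis's exact Deep Plug-and-Play Gradient Method.
import numpy as np

def pnp_gradient_sr(y, A, At, denoise, step=1.0, iters=50):
    x = At(y)                         # crude initialization by back-projection
    for _ in range(iters):
        grad = At(A(x) - y)           # gradient of the data-fidelity term
        x = denoise(x - step * grad)  # plug in a denoiser instead of a prox
    return x

# Toy usage: A = 2x decimation by 2x2 averaging, At = its adjoint.
A = lambda x: x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))
At = lambda y: np.kron(y, np.ones((2, 2))) / 4.0
denoise = lambda x: x                 # identity stands in for a learned denoiser
print(pnp_gradient_sr(np.random.rand(32, 32), A, At, denoise).shape)  # (64, 64)
```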
24

Zandavi, Seid Miad. "Indoor Autonomous Flight Using Deep Learning-Based Image Understanding Techniques". Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/22893.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Indoor autonomous flight using artificial intelligence (AI) and machine learning techniques is presented. Flying inside a building without a positioning system requires a particular framework connecting computer vision, machine learning, control theory, and AI. The framework consists of six modules/disciplines that support indoor autonomous flight: optimization, state estimation, control, object detection, deep learning, and guidance. In this regard, the mathematical model of a quadcopter/drone is derived with a high level of fidelity by considering non-linearity, uncertainties, and coupling. For the optimization module, a new heuristic optimization algorithm is designed to solve nonlinear optimization problems. The proposed algorithm uses a stochastic method, based on simplex techniques, to reach the optimal point: swarm simplexes are distributed stochastically in the search space to locate the best optimum. The designed algorithm is applied to 25 well-known benchmarks, and its performance is compared with Particle Swarm Optimization (PSO), the Nelder-Mead simplex algorithm, and the Grey Wolf Optimizer (GWO), both on its own and in hybrid forms where it is combined with either pattern search (hGWO-PS) or random exploratory search algorithms (hGWO-RES). The numerical results show that the presented algorithm, called the Stochastic Dual Simplex Algorithm (SDSA), exhibits competitive performance in terms of accuracy and complexity. This feature makes SDSA efficient for tuning hyper-parameters and obtaining the optimal weights of the reconstructed layer in deep learning modules. For the second module, state estimation, a novel filter for nonlinear system state estimation is presented. This filter formulates the state estimation problem as a stochastic dynamic optimization problem and uses a new stochastic method based on a genetic algorithm to find and track the best estimate. The experimental results show that the performance of the proposed filter, named the Genetic Filter (GF), is competitive with classical and heuristic filters. GF is implemented to estimate unknown parameters required for control. For the third module, control, a new Proportional-Integral-Derivative-Accelerated (PIDA) controller with a derivative filter is designed to improve quadcopter flight stability in a noisy environment. SDSA tunes the proposed PIDA controller with respect to the control objective. The simulation results show that the proposed control scheme is able to track the desired point in the presence of disturbances; the desired point itself is generated by extracting contextual information from images. For the fourth module, feature selection, a novel multi-region feature-selection method is proposed that defines histogram values over basic and random areas and combines them with continuous ant colony filter detection to represent the original target. The presented approach also achieves smooth tracking on different video sequences, especially under motion blur. Both recognition and tracking of a dynamic target are critical capabilities for an autonomous drone, and the experimental results demonstrate better and faster tracking than traditional methods. Image quality is the crucial requirement for supporting high performance. Finally, the deep learning and guidance modules issue action commands to the system.
Improving the image resolution can enhance the performance of the image processing module's tasks, such as object tracking, object detection, and depth detection. A new method, called a post-trained convolutional neural network (CNN), is proposed to increase the accuracy of current state-of-the-art single image super-resolution (SISR) methods. This method uses contextual information to update the last reconstruction layer of the CNN using SDSA. The drone uses the high-quality images to identify the target and estimate the relative distance. The estimated distance is passed through the guidance law (i.e., pure proportional navigation (PPN)) to generate acceleration commands. The simulation results show that adapting the deep learning-based image understanding techniques (i.e., RetinaNet ant colony detection and Pyramid Stereo Matching Network (PSMNet)) into the proposed controller enables the drone to generate and track the desired point in the presence of disturbances in a complex environment.
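As an illustration of the guidance step mentioned above, the following is a minimal 2-D sketch of a pure proportional navigation (PPN) command of the form a = N·Vc·λ̇; the geometry, speeds, and gain are hypothetical values, not taken from the thesis.

```python
# Minimal 2-D sketch of a pure proportional navigation (PPN) command,
# a = N * Vc * lambda_dot. Geometry, speeds, and the gain N are hypothetical.
import numpy as np

def ppn_accel(r_target, r_drone, v_target, v_drone, N=3.0):
    """Lateral acceleration command from the line-of-sight rotation rate."""
    r = r_target - r_drone                              # relative position
    v = v_target - v_drone                              # relative velocity
    los_rate = (r[0] * v[1] - r[1] * v[0]) / (r @ r)    # lambda_dot
    closing_speed = -(r @ v) / np.linalg.norm(r)        # Vc
    return N * closing_speed * los_rate

a_cmd = ppn_accel(np.array([100.0, 50.0]), np.array([0.0, 0.0]),
                  np.array([0.0, 0.0]), np.array([10.0, 5.0]))
print(a_cmd)
```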
25

Nilsson, Erik. "Super-Resolution for Fast Multi-Contrast Magnetic Resonance Imaging". Thesis, Umeå universitet, Institutionen för fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160808.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
There are many clinical situations where magnetic resonance imaging (MRI) is preferable over other imaging modalities, while the major disadvantage is the relatively long scan time. Due to limited resources, this means that not all patients can be offered an MRI scan, even though it could provide crucial information. It can even be deemed unsafe for a critically ill patient to undergo the examination. In MRI, there is a trade-off between resolution, signal-to-noise ratio (SNR) and the time spent gathering data. When time is of utmost importance, we seek other methods to increase resolution while preserving SNR and imaging time. In this work, I have studied one of the most promising methods for this task: constructing super-resolution algorithms that learn the mapping from a low-resolution image to a high-resolution image using convolutional neural networks. More specifically, I constructed networks capable of transferring high-frequency (HF) content, responsible for details in an image, from one kind of image to another. In this context, contrast or weight is used to describe what kind of image we look at. This work only explores the possibility of transferring HF content from T1-weighted images, which can be obtained quite quickly, to T2-weighted images, which would take much longer to acquire at similar quality. By doing so, the hope is to contribute to increased efficacy of MRI and reduce the problems associated with long scan times. At first, a relatively simple network was implemented to show that transferring HF content between contrasts is possible, as a proof of concept. Next, a much more complex network was proposed that successfully increases the resolution of MR images better than the commonly used bicubic interpolation method. This is a conclusion drawn from a test in which 12 participants were asked to rate the two methods (p=0.0016). Both visual comparisons and quality measures, such as PSNR and SSIM, indicate that the proposed network outperforms a similar network that only utilizes images of one contrast. This suggests that HF content was successfully transferred between images of different contrasts, which improves the reconstruction process. Thus, it could be argued that the proposed multi-contrast model could decrease scan time even further than its single-contrast counterpart would. Hence, this way of performing multi-contrast super-resolution has the potential to increase the efficacy of MRI.
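The following PyTorch sketch illustrates the multi-contrast idea: a small residual CNN that takes an upsampled low-resolution T2 slice together with the registered high-resolution T1 slice and predicts the missing T2 detail. The architecture is an illustrative assumption, not the network proposed in the thesis.

```python
# Hedged sketch of multi-contrast SR: fuse an upsampled low-resolution T2
# slice with a high-resolution T1 slice and predict the missing T2 detail.
# The architecture is an illustrative assumption, not the thesis's network.
import torch
import torch.nn as nn

class MultiContrastSR(nn.Module):
    def __init__(self):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, t2_lr_up, t1_hr):
        x = torch.cat([t2_lr_up, t1_hr], dim=1)  # share HF content across contrasts
        return t2_lr_up + self.fuse(x)           # residual: add predicted detail

net = MultiContrastSR()
out = net(torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128))
print(out.shape)  # torch.Size([1, 1, 128, 128])
```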
26

Castillo, Araújo Victor. "Ensembles of Single Image Super-Resolution Generative Adversarial Networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-290945.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Generative Adversarial Networks have been used to obtain state-of-the-art results for low-level computer vision tasks like single image super-resolution; however, they are notoriously difficult to train due to the instability of the competing minimax framework. Additionally, traditional ensembling mechanisms cannot be applied effectively to these networks because of the resources they require at inference time and the complexity of their architectures. In this thesis, an alternative method for creating ensembles from individual models that are more stable and easier to train, by interpolating in the models' parameter space, is found to produce better results than the initial individual models when evaluated using perceptual metrics as a proxy for human judges. This method can be used as a framework to train GANs with perceptual results competitive with state-of-the-art alternatives.
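A minimal sketch of the ensembling mechanism described above, interpolating the parameters of two trained models of identical architecture; the toy nn.Linear modules stand in for SR generators and alpha=0.5 is arbitrary.

```python
# Sketch of parameter-space interpolation between two trained models of
# identical architecture. Toy modules stand in for SR generators.
import copy
import torch
import torch.nn as nn

def interpolate_weights(model_a: nn.Module, model_b: nn.Module, alpha: float = 0.5):
    """Return a copy of model_a whose weights are (1 - alpha) * A + alpha * B."""
    merged = copy.deepcopy(model_a)
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    merged.load_state_dict({
        k: torch.lerp(sd_a[k].float(), sd_b[k].float(), alpha)
        if sd_a[k].is_floating_point() else sd_a[k]   # skip integer buffers
        for k in sd_a
    })
    return merged

gen_a, gen_b = nn.Linear(8, 8), nn.Linear(8, 8)  # stand-ins for two generators
soup = interpolate_weights(gen_a, gen_b, alpha=0.5)
```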
27

Pham, Chi-Hieu. "Apprentisage profond pour la super-résolution et la segmentation d'images médicales". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0124/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis studies the behavior of different image representations and develops a method for super-resolution, cross-modal synthesis, and segmentation of medical images. Super-resolution aims to enhance image resolution using single or multiple data acquisitions. In this work, we focus on single image super-resolution (SR), which estimates a high-resolution (HR) image from one corresponding low-resolution (LR) image. Increasing image resolution through SR is a key to a more accurate understanding of the anatomy, and applying super-resolution techniques has been shown to lead to more accurate segmentation maps. Sometimes, certain tissue contrasts cannot be acquired during the imaging session because of time constraints, high cost, or lack of devices. One possible solution is to use medical image cross-modal synthesis methods to generate the missing subject-specific scans in the desired target domain from the given source image domain. The objective of synthetic images is to improve other automatic medical image processing steps such as segmentation, super-resolution, or registration. In this thesis, convolutional neural networks are applied to super-resolution and cross-modal synthesis in the context of supervised learning. In addition, an attempt to apply generative adversarial networks to unpaired cross-modal synthesis of brain MRI is described. Results demonstrate the potential of deep learning methods for practical medical applications.
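As a concrete example of the kind of supervised CNN discussed above, here is a minimal SRCNN-style network (feature extraction, non-linear mapping, reconstruction); the layer sizes follow the classic SRCNN and are assumptions, not this thesis's exact architecture.

```python
# Minimal SRCNN-style network operating on a bicubic-upsampled input.
# Layer sizes follow the classic SRCNN; they are illustrative assumptions.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),   # features
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),             # mapping
            nn.Conv2d(32, 1, kernel_size=5, padding=2),              # reconstruction
        )

    def forward(self, x):              # x: bicubic-upsampled LR image
        return self.net(x)

model = SRCNN()
print(model(torch.rand(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```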
28

Peyrard, Clément. "Single image super-resolution based on neural networks for text and face recognition". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI083/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis is focused on super-resolution (SR) methods for improving automatic recognition systems (optical character recognition, face recognition) in realistic contexts. SR methods allow high-resolution images to be generated from low-resolution ones. Unlike upsampling methods such as interpolation, they restore spatial high frequencies and compensate for artefacts such as blur or jaggy edges. In particular, example-based approaches learn and model the relationship between low- and high-resolution spaces via pairs of low- and high-resolution images. Artificial neural networks are among the most efficient systems to address this problem. This work demonstrates the interest of SR methods based on neural networks for improved automatic recognition systems. By adapting the data, it is possible to train such machine learning algorithms to produce high-resolution images. Convolutional neural networks are especially efficient, as they are trained to extract relevant non-linear features while simultaneously learning the mapping between low- and high-resolution spaces. On document text images, the proposed method improves OCR accuracy by +7.85 points compared with simple interpolation. The creation of an annotated image dataset and the organisation of an international competition (ICDAR2015) highlighted the interest and relevance of such approaches. Moreover, if a priori knowledge is available, it can be exploited by a suitable network architecture. For facial images, face features are critical for automatic recognition. A two-step method is proposed in which image resolution is first improved globally, followed by specialised models that focus on the essential features. An off-the-shelf face verification system has its performance improved by +6.91 up to +8.15 points. Finally, to address the variability of real-world low-resolution images, deep neural networks make it possible to absorb the diversity of the blurring kernels that characterise low-resolution images. With a single model, high-resolution images are produced with natural image statistics, without any knowledge of the actual observation model of the low-resolution image.
29

Kabir, Md Faisal. "Extracting Useful Information and Building Predictive Models from Medical and Health-Care Data Using Machine Learning Techniques". Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/31924.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
In healthcare, large amounts of medical data have emerged. To effectively use these data to improve healthcare outcomes, clinicians need to identify the relevant measures and apply the correct analysis methods for the type of data at hand. In this dissertation, we present various machine learning (ML) and data mining (DM) methods that can be applied to the types of data sets available in the healthcare area. The first part of the dissertation investigates DM methods on healthcare or medical data to find significant information in the form of rules. Class association rule mining, a variant of association rule mining, was used to obtain rules with targeted items or class labels. These rules can be used to improve public awareness of different cancer symptoms and could also be useful for initiating prevention strategies. In the second part of the thesis, ML techniques are applied to healthcare or medical data to build predictive models. Three different classification techniques are investigated on a real-world breast cancer risk factor data set. Due to the imbalanced characteristics of the data set, various resampling methods were used before applying the classifiers; performance improved significantly when a resampling technique was applied compared with no resampling. Moreover, the super learning technique, which uses multiple base learners, is investigated to boost the performance of classification models. Two forms of super learner are investigated: the first uses two base learners, while the second uses three. The models were then evaluated against well-known benchmark data sets related to the healthcare domain, and the results showed that the SL model performs better than the individual classifiers and the baseline ensemble. Finally, we assessed cancer-relevant genes of prostate cancer with the most significant correlations with the clinical outcomes of sample type and overall survival. Rules from the RNA-sequencing of prostate cancer patients were discovered. Moreover, we built a regression model, and from this model rules for predicting patient survival time were generated.
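A minimal sketch of a two-base-learner super learner using scikit-learn's cross-validated stacking; the base learners, meta-learner, and synthetic imbalanced data are illustrative assumptions, since the abstract does not specify them.

```python
# Minimal two-base-learner super learner via cross-validated stacking.
# Base/meta model choices and the synthetic imbalanced data are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

super_learner = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),   # meta-learner on out-of-fold preds
    cv=5,
)
super_learner.fit(X_tr, y_tr)
print("held-out accuracy:", super_learner.score(X_te, y_te))
```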
30

Bai, Jiachuan. "Deep learning augmented single molecule localization microscopy reconstruction : enhancing robustness and moving towards live cells". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS337.pdf.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
While microscopy has been a central technique for cell biology for centuries, it has long been limited by diffraction to a resolution of ~200-300 nm. As a consequence, many molecular structures, such as viruses, nuclear pores, or microtubules, were left unresolved. Single-molecule localization microscopy (SMLM) offers a high spatial resolution (e.g., 20 nm or better), allowing biological structures to be resolved at or near the molecular scale. However, SMLM acquisition necessitates acquiring many thousands of low-resolution frames, mostly limiting its applications to fixed cells or to structures undergoing slow dynamics. To overcome this limitation, Ouyang et al. (2018) developed a deep learning-based approach called ANNA-PALM that can reconstruct a super-resolution image from much fewer low-resolution frames. However, the original ANNA-PALM method faced several limitations. First, ANNA-PALM had only been trained and tested on images from our laboratory. Second, the method exhibits artifacts when applied to images obtained using different protocols or experimental conditions than the training data. Third, ANNA-PALM had only been demonstrated on fixed cells. The objectives of my Ph.D. thesis are to address these limitations by 1) improving the robustness of ANNA-PALM reconstructions when applied to data obtained from distinct laboratories and 2) extending ANNA-PALM to reconstruct super-resolved time-lapse image sequences for dynamic biological structures in live cells. 1. Improving the robustness of ANNA-PALM: an obvious approach to improve robustness is to retrain the model using a larger and more varied data set. However, SMLM datasets are not usually publicly accessible. To address this, our lab developed ShareLoc, an online platform (shareloc.xyz) that allows the gathering and reuse of SMLM datasets acquired by the microscopy community. I first validated the platform's functionalities, curated SMLM data, implemented a ShareLoc ontology, and wrote relevant documentation. Next, I took advantage of ShareLoc data to retrain ANNA-PALM on larger and more diverse images and quantitatively evaluated the image reconstruction quality compared to the original model. I demonstrated that the robustness and reconstruction quality of ANNA-PALM significantly improved, notably when applied to images of microtubules taken under biological perturbation conditions never seen by the model during training. 2. Extending ANNA-PALM to reconstruct super-resolution movies of moving structures in live cells: achieving high-quality super-resolution reconstructions of structural dynamics is challenging. To avoid motion blur, each frame of the reconstructed movie is defined from localizations in only a small number of consecutive low-resolution frames. This leads to a strong under-sampling of the structures by single molecule localization events and does not enable super-resolution. Although ANNA-PALM can reconstruct high-quality super-resolved images from under-sampled localization data, training ANNA-PALM for live cells is more difficult, because a clear ground truth is lacking. The absence of ground truth also makes it difficult to assess reconstruction quality. To address these challenges, I first developed a method to generate ground truth super-resolution movies from static SMLM images obtained from long acquisition sequences. I implemented and tested this strategy using both simulated and experimental SMLM images.
Second, I extended the ANNA-PALM architecture to 3D data, where the third dimension is time, in order to incorporate temporal information. I used simulations of microtubule dynamics to quantitatively evaluate the reconstruction quality of this approach in comparison with the original 2D ANNA-PALM, and as a function of structure velocity and localization rates. [...]
31

Rizzo, Vittorio. "Ricostruzione di segnali ultrasonici sottocampionati mediante tecniche di compressive sensing e di single-image super-resolution". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Cerca il testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Laser ultrasonic scanning is a very effective structural health monitoring method for damage detection, thanks to its non-destructive nature, its sensitivity to local damage, and its high spatial resolution. The images reconstructed from the propagation of ultrasonic waves in the structure of interest make it possible to detect the presence of damage. However, laser ultrasonic scanning requires very long scan times to achieve the high spatial resolution and adequate signal-to-noise ratio needed for damage detection. The goal of this thesis is to present a method for reconstructing high-resolution ultrasonic images from a reduced number of scanned points, thereby solving the problem of excessive scan times. The method is based on the implementation of two techniques: compressive sensing and super-resolution. Compressive sensing (CS) makes it possible to acquire a signal from fewer samples than required by the Shannon-Nyquist criterion, with little loss of information. Super-resolution (SR) is a technique for obtaining high-resolution images from low-resolution ones. In this work, the super-resolution algorithm takes as input the images produced by the CS algorithm and processes them to obtain an image closer to the true ultrasonic image.
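A minimal compressive-sensing sketch in the spirit of the first technique above: recover a sparse signal from random undersampled measurements by L1-regularized least squares solved with ISTA; the problem sizes and regularization weight are illustrative.

```python
# Minimal compressive-sensing sketch: recover a k-sparse signal from random
# undersampled measurements y = A x via ISTA. Sizes and lambda are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                           # length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)       # random sensing matrix
y = A @ x_true

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L = Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    z = x - step * A.T @ (A @ x - y)                        # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```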
32

Ecoto, Dicka Geoffrey. "Modélisation et apprentissage machine learning appliqués à l'estimation des dommages consécutifs à la survenance d'un événement de sécheresse par retrait-gonflement des argiles dans le cadre du régime d'indemnisation des catastrophes naturelles français". Electronic Thesis or Diss., Université Paris Cité, 2023. http://www.theses.fr/2023UNIP7182.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This Ph.D. thesis is dedicated to forecasting the financial impact on insured properties in the event of drought, utilizing methods that merge statistics and machine learning. In this context, "drought" refers to the phenomenon of clay shrinkage and swelling that leads to damage to buildings. The task can be broken down into two sub-problems that we address separately. The first sub-problem focuses on predicting which municipalities will submit a request for the government declaration of natural disaster for a drought event. The second is dedicated to predicting the financial impact of drought events on insured properties located in municipalities that obtained the government declaration of natural disaster for a drought event. For the first sub-problem, we develop, study, and apply an original algorithm to predict requests for the government declaration of natural disaster. The algorithm benefits from two complementary formalizations of the task at hand, approached as both supervised classification and an optimal transport problem. The final predictions are obtained as a geometric mean of these two prediction types. Theoretically, the optimal transport plan can be obtained by applying the iPiano algorithm [Ochs et al., 2015], and we demonstrate that the assumptions underlying its analysis are met. The analysis of the predictions obtained confirms the algorithm's relevance. Regarding the second sub-problem, we develop, investigate, and apply an original aggregation algorithm, inspired by the Super Learner [van der Laan, 2007]. Two challenges must be considered. First, since drought events have only been covered by the French natural disaster compensation scheme since 1989, the number of drought events available for training our algorithm is limited, with each drought event associated with a large dataset. Second, temporal dependence is compounded by spatial dependence, primarily due to geographic and administrative proximity between French municipalities. Based on a dependency modeling using a dependency graph, the theoretical analysis reveals that the brevity of the time series can be compensated for if spatial dependence is weak. Once again, the analysis of the predictions obtained underscores the relevance of our algorithm.
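The final-prediction rule described above can be sketched in a few lines; the per-municipality probabilities below are hypothetical placeholders.

```python
# Sketch of the final-prediction rule: combine the supervised-classification
# probability with the optimal-transport-derived probability by geometric
# mean. The per-municipality values are hypothetical placeholders.
import numpy as np

p_classif = np.array([0.80, 0.10, 0.55])    # supervised-classification view
p_transport = np.array([0.60, 0.30, 0.45])  # optimal-transport view
p_final = np.sqrt(p_classif * p_transport)  # geometric mean of the two views
print(p_final)
```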
33

Reis, Saulo Roberto Sodré dos. "Um método iterativo e escalonável para super-resolução de imagens usando a interpolação DCT e representação esparsa". Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-29122014-113437/.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
In scenarios where acquisition systems have limited resources or the available images are of poor quality, super-resolution (SR) techniques have become an excellent alternative for improving image quality. In this thesis, we propose a single-image super-resolution (SR) method that combines the benefits of DCT interpolation and the efficiency of a sparse-representation method for image reconstruction. The proposed method also seeks to take advantage of the improvements already achieved in the quality and computational efficiency of existing SR algorithms. It implements several improvements in the dictionary training and reconstruction stages. A new dictionary was built using an unsharp-mask technique for feature extraction. This strategy aims to extract more structural information from the low-resolution and high-resolution patches and, at the same time, reduce the dictionary size. Another important contribution is the inclusion of an iterative and scalable process that reinserts the HR image obtained in the first iteration. This solution aims to improve the quality of the final HR image while using only a few images in the training set. The results demonstrate the ability of the proposed method to produce high-quality images with reduced computational time.
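One ingredient of the method, DCT-domain interpolation, can be sketched as zero-padding the 2-D DCT coefficients; the amplitude rescaling shown is an assumption of this sketch, not the thesis's exact formulation.

```python
# Sketch of DCT-domain interpolation: upscale an image 2x by zero-padding its
# 2-D DCT coefficients. The amplitude rescaling is an assumption of the sketch.
import numpy as np
from scipy.fft import dctn, idctn

def dct_upscale(img: np.ndarray, factor: int = 2) -> np.ndarray:
    h, w = img.shape
    padded = np.zeros((h * factor, w * factor))
    padded[:h, :w] = dctn(img, norm="ortho")   # keep low frequencies, pad rest
    return idctn(padded, norm="ortho") * factor

lr = np.random.rand(32, 32)
print(dct_upscale(lr).shape)  # (64, 64)
```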
34

Ponder, James D. Mattson-Lauters Amy. "Learning from the ads : A triangulated examination of the assault on the last bastion of hegemonic masculinity: the super bowl 2003-2007 /". Diss., A link to full text of this thesis in SOAR, 2007. http://soar.wichita.edu/dspace/handle/10057/1166.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Thesis (M.A.)--Wichita State University, College of Liberal Arts and Sciences, Elliott School of Communication.
"May 2007." Title from PDF title page (viewed on Dec. 28, 2007). Thesis adviser: Amy Mattson-Lauters. Includes bibliographic references (leaves 111-122).
35

Zhou, Yi. "Graph-based Mix-out Networks for Video Restoration". Thesis, The University of Sydney, 2019. https://hdl.handle.net/2123/21487.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The video restoration task aims to generate clear, high-quality videos from noisy or blurry low-quality videos. Unlike image restoration tasks, video restoration has temporal information as an additional data dimension, and this information plays a key role. How to take full advantage of inter-frame temporal information is therefore a challenging problem in recovering video quality. Current approaches attempt either to warp features based on an estimated motion modality before post-processing, or to fuse features (e.g., by concatenation) only once at a specific place during processing. The former relies heavily on the accuracy of the estimated motion representation, but existing motion-estimation algorithms are either too slow or inaccurate. The latter fuses features too few times (usually only once), which loses intra-frame information after fusion and underuses inter-frame information. Therefore, to fully utilize inter-frame information, we design a network with a mix-out mechanism. Our Mix-out network can simultaneously refine inter-frame and intra-frame information instead of relying fully on only one of them. Besides, we devise a novel Graph-based Pixel Associating module (GPA module) to better infer the pixel-wise correlation of adjacent frames, which to our knowledge is the first pixel-wise GCN module that can be practically applied to low-level tasks. Extensive experiments show that our method outperforms the state-of-the-art methods on the video compression artifact removal task by 0.74dB (q=20) and 0.83dB (q=40) for codec JPEG2000, and by 0.46dB (qp=37) and 0.48dB (qp=32) for codec HEVC (x265), in terms of PSNR. We further extend our method to other video restoration tasks such as video super-resolution and video Gaussian denoising, and also achieve state-of-the-art performance.
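For reference, the PSNR metric behind the dB gains quoted above can be computed as follows (peak value assumed 255 for 8-bit frames).

```python
# Minimal sketch of the PSNR metric (peak assumed 255 for 8-bit frames).
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

a = np.random.randint(0, 256, (64, 64))
b = np.clip(a + np.random.randint(-5, 6, a.shape), 0, 255)
print(psnr(a, b))
```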
36

Kuntala, Prashant Kumar. "Optimizing Biomarkers From an Ensemble Learning Pipeline". Ohio University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1503592057943043.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Hatvani, Janka. "Amélioration des images médicales à l'aide de techniques d'apprentissage profond et de factorisation tensorielle". Electronic Thesis or Diss., Toulouse 3, 2021. http://www.theses.fr/2021TOU30304.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The resolution of dental cone beam computed tomography (CBCT) images is limited by detector geometry, sensitivity, patient movement, the reconstruction technique and the need to minimize radiation dose. The corresponding image degradation model assumes that the CBCT image is a blurred (with a point spread function, PSF), downsampled, noisy version of a high-resolution image. The quality of the image is crucial for precise diagnosis and treatment planning. The methods proposed in this thesis aim to give a solution to the single image super-resolution (SISR) problem. The algorithms were evaluated on dental CBCT and corresponding high-resolution (and high radiation-dose) µCT image pairs of extracted teeth. I have designed a deep learning framework for the SISR problem, applied to CBCT slices. I have tested the U-net and subpixel neural networks, which both improved the PSNR by 21-22 dB, and the Dice coefficient of the canal segmentation by 1-2.2%, more significantly in the medically critical apical region. I have designed an algorithm for the 3D SISR problem using the canonical polyadic decomposition (CPD) of tensors. This implementation conserves the 3D structure of the volume, integrating factorization-based denoising, deblurring with a known PSF, and upsampling of the image in a lightweight algorithm with a low number of parameters. It outperforms the state-of-the-art 3D reconstruction-based algorithms with two orders of magnitude faster run-time and provides similar PSNR (improvement of 1.2-1.5 dB) and segmentation metrics (Dice coefficient increased on average to 0.89 and 0.90). I have also implemented a joint alternating recovery of the unknown PSF parameters and of the high-resolution 3D image using CPD-SISR. The algorithm was compared to a state-of-the-art 3D reconstruction-based algorithm combined with the proposed alternating PSF-optimization. The two algorithms showed similar improvement in PSNR, but CPD-SISR-blind converged roughly 40 times faster, under 6 minutes both in simulation and on experimental dental computed tomography data. Finally, I have proposed a solution for the 3D SISR problem using the Tucker decomposition (TD-SISR). The denoising step is realized first by TD in order to mitigate the ill-posedness of the subsequent deconvolution. Compared to CPD-SISR the algorithm runs ten times faster. Depending on the amount of noise, higher PSNR (0.3 - 3.5 dB), SSI (0.58 - 2.43%) and segmentation values (Dice coefficient, 2% improvement) were measured. The parameters in TD-SISR are familiar from 2D SVD-based algorithms, so their tuning is easier compared to CPD-SISR.
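A hedged sketch of the truncated-Tucker denoising step in TD-SISR: approximate a noisy 3-D volume by a low-multilinear-rank Tucker model using tensorly; the synthetic volume, noise level, and ranks are illustrative.

```python
# Hedged sketch of truncated-Tucker denoising: fit a rank-(4,4,4) Tucker model
# to a noisy low-rank volume. Volume, noise level, and ranks are illustrative.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4, 4))                      # core of the clean volume
U = [rng.normal(size=(32, 4)) for _ in range(3)]    # factor matrices
clean = np.einsum("abc,ia,jb,kc->ijk", G, *U)
noisy = clean + 0.5 * rng.normal(size=clean.shape)

core, factors = tucker(tl.tensor(noisy), rank=[4, 4, 4])
denoised = tl.tucker_to_tensor((core, factors))     # low-rank reconstruction
err = lambda x: np.linalg.norm(x - clean) / np.linalg.norm(clean)
print(f"noisy: {err(noisy):.3f}  denoised: {err(denoised):.3f}")
```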
38

Cho, Myung. "Convex and non-convex optimizations for recovering structured data: algorithms and analysis". Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5922.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Optimization theories and algorithms are used to efficiently find optimal solutions under constraints. In the era of "Big Data", the amount of data is skyrocketing, and this overwhelms conventional techniques used to solve large scale and distributed optimization problems. By taking advantage of structural information in data representations, this thesis offers convex and non-convex optimization solutions to various large scale optimization problems such as super-resolution, sparse signal processing, hypothesis testing, machine learning, and treatment planning for brachytherapy. Super-resolution: Super-resolution aims to recover a signal expressed as a sum of a few Dirac delta functions in the time domain from measurements in the frequency domain. The challenge is that the possible locations of the delta functions are in the continuous domain [0,1). To enhance recovery performance, we considered deterministic and probabilistic prior information for the locations of the delta functions and provided novel semidefinite programming formulations under the information. We also proposed block iterative reweighted methods to improve recovery performance without prior information. We further considered phaseless measurements, motivated by applications in optic microscopy and x-ray crystallography. By using the lifting method and introducing the squared atomic norm minimization, we can achieve super-resolution using only low frequency magnitude information. Finally, we proposed non-convex algorithms using structured matrix completion. Sparse signal processing: L1 minimization is well known for promoting sparse structures in recovered signals. The Null Space Condition (NSC) for L1 minimization is a necessary and sufficient condition on sensing matrices such that a sparse signal can be uniquely recovered via L1 minimization. However, verifying NSC is a non-convex problem and known to be NP-hard. We proposed enumeration-based polynomial-time algorithms to provide performance bounds on NSC, and efficient algorithms to verify NSC precisely by using the branch and bound method. Hypothesis testing: Recovering statistical structures of random variables is important in some applications such as cognitive radio. Our goal is distinguishing two different types of random variables among n>>1 random variables. Distinguishing them via experiments for each random variable one by one takes a lot of time and effort. Hence, we proposed hypothesis testing using mixed measurements to reduce sample complexity. We also designed efficient algorithms to solve large scale problems. Machine learning: When feature data are stored in a tree structured network having time delay in communication, quickly finding an optimal solution to the regularized loss minimization is challenging. In this scenario, we studied a communication-efficient stochastic dual coordinate ascent and its convergence analysis. Treatment planning: In the Rotating-Shield Brachytherapy (RSBT) for cancer treatment, there is a compelling need to quickly obtain optimal treatment plans to enable clinical usage. However, due to the degree of freedom in RSBT, finding optimal treatment planning is difficult. For this, we designed a first order dose optimization method based on the alternating direction method of multipliers, and reduced the execution time by a factor of about 18 compared to previous research.
39

Rajnoha, Martin. "Určování podobnosti objektů na základě obrazové informace". Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-437979.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Monitoring of public areas and their automatic real-time processing have become increasingly significant due to the changing security situation in the world. A persistent problem is the analysis of low-quality records, where even state-of-the-art methods fail in some cases. This work investigates an important area of image similarity: biometric identification based on face images. It deals primarily with face super-resolution from a sequence of low-resolution images and compares this approach to single-frame methods, which are still considered the most accurate. A new dataset was created for this purpose, designed directly for multi-frame face super-resolution methods from low-resolution input sequences, and of comparable size to the leading world datasets. The results were evaluated both by a survey of human perception and by defined objective metrics. The hypothesis that multi-frame methods achieve better results than single-frame methods was confirmed by a comparison of both methods. The architectures, source code, and dataset were released, creating a basis for future research in this field.
40

Reza, Katebi. "Nuclear Outbursts in the Centers of Galaxies". Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1573031465540983.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Mohammadiha, Nasser. "Speech Enhancement Using Nonnegative MatrixFactorization and Hidden Markov Models". Doctoral thesis, KTH, Kommunikationsteori, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-124642.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Reducing interference noise in a noisy speech recording has been a challenging task for many years, yet it has a variety of applications, for example in hands-free mobile communications, speech recognition, and hearing aids. Traditional single-channel noise reduction schemes, such as Wiener filtering, do not work satisfactorily in the presence of non-stationary background noise. Alternatively, supervised approaches, where the noise type is known in advance, lead to higher-quality enhanced speech signals. This dissertation proposes supervised and unsupervised single-channel noise reduction algorithms. We consider two classes of methods for this purpose: approaches based on nonnegative matrix factorization (NMF) and methods based on hidden Markov models (HMM). The contributions of this dissertation can be divided into three main (overlapping) parts. First, we propose NMF-based enhancement approaches that use temporal dependencies of the speech signals. In a standard NMF, the important temporal correlations between consecutive short-time frames are ignored. We propose both continuous and discrete state-space nonnegative dynamical models. These approaches are used to describe the dynamics of the NMF coefficients or activations. We derive optimal minimum mean squared error (MMSE) or linear MMSE estimates of the speech signal using the probabilistic formulations of NMF. Our experiments show that using temporal dynamics in the NMF-based denoising systems greatly improves performance. Additionally, this dissertation proposes an approach to learn the noise basis matrix online from the noisy observations. This relaxes the assumption of an a priori specified noise type and enables us to use the NMF-based denoising method in an unsupervised manner. Our experiments show that the proposed approach with online noise basis learning considerably outperforms state-of-the-art methods in different noise conditions. Second, this thesis proposes two methods for NMF-based separation of sources with similar dictionaries. We suggest a nonnegative HMM (NHMM) for babble noise that is derived from a speech HMM. In this approach, speech and babble signals share the same basis vectors, whereas the activations of the basis vectors differ for the two signals over time. We derive an MMSE estimator for the clean speech signal using the proposed NHMM. The objective evaluations and the subjective listening test performed show that the proposed babble model and the final noise reduction algorithm noticeably outperform the conventional methods. Moreover, the dissertation proposes another solution to separate a desired source from a mixture with arbitrarily low artifacts. Third, an HMM-based algorithm to enhance the speech spectra using super-Gaussian priors is proposed. Our experiments show that speech discrete Fourier transform (DFT) coefficients have super-Gaussian rather than Gaussian distributions, even if we limit the speech data to come from a specific phoneme. We derive a new MMSE estimator for the speech spectra that uses super-Gaussian priors. The results of our evaluations using the developed noise reduction algorithm support the super-Gaussianity hypothesis.
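A hedged sketch of supervised NMF-based denoising in the spirit described above: learn speech and noise bases from training magnitude spectrograms, estimate activations on the noisy mixture with the bases held fixed, and apply a Wiener-like mask. The random data, ranks, and sizes are placeholders, and this deliberately omits the temporal dynamics that are the dissertation's contribution.

```python
# Hedged sketch of supervised NMF denoising with a fixed joint dictionary and
# a Wiener-like mask. Random data, ranks, and sizes are placeholders.
import numpy as np
from sklearn.decomposition import NMF, non_negative_factorization

rng = np.random.default_rng(0)
F_, T, K = 257, 200, 20                        # freq bins, frames, basis size
S_tr = np.abs(rng.normal(size=(F_, T)))        # speech training magnitudes
N_tr = np.abs(rng.normal(size=(F_, T)))        # noise training magnitudes

W_s = NMF(n_components=K, init="nndsvda", max_iter=300).fit(S_tr.T).components_.T
W_n = NMF(n_components=K, init="nndsvda", max_iter=300).fit(N_tr.T).components_.T
W = np.hstack([W_s, W_n])                      # fixed joint dictionary (F_, 2K)

Y = np.abs(rng.normal(size=(F_, T)))           # noisy mixture magnitudes
A0 = np.full((T, 2 * K), 0.1)                  # activation initialization
A_t, _, _ = non_negative_factorization(
    Y.T, W=A0, H=W.T, n_components=2 * K, init="custom",
    update_H=False, max_iter=300)              # solve for activations only
A = A_t.T                                      # (2K, T)
S_hat, N_hat = W_s @ A[:K], W_n @ A[K:]
mask = S_hat / (S_hat + N_hat + 1e-12)         # Wiener-like spectral gain
enhanced = mask * Y
```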


42

Holub, Jiří. "Zvýšení kvality fotografie s použitím hlubokých neuronových sítí". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-377334.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This diploma thesis deals with image super-resolution while preserving image quality. It first surveys state-of-the-art methods for this problem and reviews the principles of neural networks, with a focus on convolutional ones. Finally, it describes several convolutional neural network models for 2x image super-resolution, which were trained, tested, and compared on a newly created database of pictures of people.
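The abstract does not spell out the thesis's architectures; as a hedged sketch of the kind of convolutional model typically trained for 2x upscaling, here is a minimal SRCNN-style network in PyTorch (the layer sizes follow the standard SRCNN layout and are assumptions, not the thesis's actual design):

```python
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer CNN that maps a bicubically pre-upscaled image to a
    sharper one: patch extraction -> nonlinear mapping -> reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1),           nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # x: (N, 1, H, W) luminance, already upscaled 2x by bicubic interpolation
        return self.net(x)

model = SRCNN()
loss_fn = nn.MSELoss()   # regresses the ground-truth high-resolution image
```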
43

McNamara, J. David. "Re-imaging catechesis, an approach to intergenerational cooperative learning in sacrament preparation". Chicago, Ill : McCormick Theological Seminary, 1999. http://www.tren.com.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Withers, Denissia Elizabeth. "Engaging Community Food Systems through Learning Garden Programs: Oregon Food Bank's Seed to Supper Program". PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/609.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The purpose of this study was to discover whether learning garden programs increase access to locally grown foods and successfully empower and include food insecure populations. This study examined the Oregon Food Bank's Seed to Supper program, which situates garden-based learning in food insecure communities. Through a mixed-methods community-based research process, this study found that community building, learner empowerment, and sustainability leadership in place-based learning garden programs increased access to locally grown foods for food insecure populations. When food insecure populations participated in these learning garden programs, they often engaged in practices described in the literature as the "web of inclusion" (Helgesen, 1995). When they were engaged in these practices, participation in food democracy and food justice increased. Additionally, participation in learning gardens led to sustainability leadership and increased food literacy, which in turn supported greater community health and more engaged local community food systems.
45

Baptiste, Julien. "Problèmes numériques en mathématiques financières et en stratégies de trading". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLED009.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The aim of this CIFRE thesis is to build a portfolio of intraday algorithmic trading strategies. Instead of considering prices as a function of time and of a random term usually modeled by Brownian motion, our approach is to identify the main signals that order issuers rely on in their decision making, and then to propose a price model on which to build dynamic portfolio allocation strategies. In a second, more academic part, we present work on the pricing of European and Asian options.
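The abstract does not reproduce the thesis's own numerical schemes; as a generic illustration of European versus Asian option pricing, here is a minimal Monte Carlo sketch under geometric Brownian motion (all parameter values are illustrative):

```python
import numpy as np

def mc_call_price(S0, K, r, sigma, T, n_paths=50_000, n_steps=100,
                  asian=False, seed=0):
    """Monte Carlo price of a European or arithmetic-average Asian call
    under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # log-price increments, cumulated along each simulated path
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    paths = S0 * np.exp(log_paths)
    ref = paths.mean(axis=1) if asian else paths[:, -1]
    payoff = np.maximum(ref - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(mc_call_price(100, 100, 0.02, 0.2, 1.0))              # European call
print(mc_call_price(100, 100, 0.02, 0.2, 1.0, asian=True))  # Asian call
```

The Asian price comes out below the European one on the same parameters, since averaging along the path reduces the volatility of the payoff.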
46

Defrance, Anne. "Nature du savoir et formulation des définitions dans les cours de mathématiques du secondaire". Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210174.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
What is the nature of the mathematics taught in secondary school classes (pupils aged 12 to 18)? To what extent does competence-based teaching hinder access to mathematics as teachers dream of it? What is taught has the characteristics of a text, of a scriptural form that differs from the oral form of a society without writing. The formulation of definitions proves to be a powerful tool for this analysis. Through four tensions, the empirical investigations reveal how difficult it is for teachers to bring their pupils into the learning of a mathematical theory. The analysis of the various ways teachers validate what they teach shows the problematic situation in which the teaching of mathematics finds itself today. One remedy could be the learning of idiomatic competence.


Doctorate in Psychological and Educational Sciences

47

Chiang, Hao-Tien, e 姜昊天. "Integrated Learning-Based Super Resolution". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/71289407585998299187.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
National Taiwan University
Graduate Institute of Networking and Multimedia
Academic year 99 (ROC calendar, 2010-2011)
Nowadays, the demand for higher image resolution is growing rapidly. However, high-resolution images from modern capture devices are usually expensive, and not everyone can afford such hardware. Therefore, "super-resolution" techniques, which enhance a low-resolution image to a higher resolution, are quite important; in recent decades, much research has been devoted to this field and plenty of algorithms have been proposed. In this thesis, we present an integrated learning-based super-resolution method. Learning-based super-resolution techniques model the co-occurrence patterns between the high- and low-resolution patches of example images to estimate the missing details of a low-resolution input. Our system has two parts: a training phase and a synthesis phase. In the training phase, we construct a database. In the synthesis phase, we retrieve suitable data and build a multi-scale self-similarity model to update the database. We choose the corresponding super-resolution algorithm based on the image content, use back-projection to enforce the global reconstruction constraint, and then enhance the details of the super-resolved image. Compared to existing learning-based approaches, our proposed method significantly improves image quality: the produced super-resolution images have sharp edges and rich details, and the algorithm is very efficient.
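The back-projection step mentioned above enforces the global reconstruction constraint by pushing the residual between the observed LR image and a re-downsampled SR estimate back into the estimate. A minimal sketch, assuming the SR estimate is exactly `scale` times the LR size and using cubic interpolation in place of whatever kernel the thesis actually uses:

```python
import numpy as np
from scipy.ndimage import zoom

def back_project(sr, lr, scale=2, n_iter=10):
    """Iterative back-projection: repeatedly push the residual between the
    observed LR image and the re-downsampled SR estimate back into the
    SR estimate. Assumes sr.shape == (scale * h, scale * w) for lr of (h, w)."""
    x = sr.astype(float)
    for _ in range(n_iter):
        simulated_lr = zoom(x, 1.0 / scale, order=3)  # simulate the LR observation
        residual = lr - simulated_lr                  # reconstruction error
        x += zoom(residual, scale, order=3)           # upsample and add back
    return x
```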
48

Cheng, Yi-Chi, e 鄭義錡. "Super Resolution for e-Learning". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/08190088633924527081.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
Academic year 99 (ROC calendar, 2010-2011)
With the development of image display devices, people demand much better image quality. High-end image capture devices are still expensive, so software is usually used to enhance image quality instead. Super-resolution is a popular research area in digital image processing, with widespread applications including military and surveillance uses. Recently, the demand for on-line digital learning has grown considerably, but many teachers do not have good image capture devices and cannot record their teaching content in high quality. In this thesis, we propose a method to enhance ordinary teaching videos, especially for teaching that uses a blackboard, whiteboard, or projection screen. Although super-resolution algorithms can enhance image resolution, most of them suffer from long execution times and disregard the image content. We use edge detection to estimate the edge density of small image blocks and mean shift to perform color segmentation. Integrating this information, we determine which regions attract the most attention, which come second, and which we do not care about, and we process them with algorithms of different complexity. Experimental results show that we only have to process about 20% of the whole image area, which decreases execution time significantly while still producing output images of good quality.
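A minimal sketch of the block-classification idea (edge density per block deciding how much SR effort a region gets); the thresholds are assumptions, and the mean-shift color segmentation step is omitted:

```python
import numpy as np

def attention_labels(gray, block=16, hi=0.10, lo=0.02):
    """Label each block 2 (full SR), 1 (cheap SR) or 0 (plain interpolation)
    by its edge density, so expensive processing is spent where viewers
    look, e.g. text and drawings on the board."""
    gy, gx = np.gradient(gray.astype(float))
    edges = np.hypot(gx, gy) > 30.0          # crude edge map; threshold assumed
    h, w = gray.shape
    labels = np.zeros((h // block, w // block), dtype=int)
    for i in range(h // block):
        for j in range(w // block):
            density = edges[i*block:(i+1)*block, j*block:(j+1)*block].mean()
            labels[i, j] = 2 if density > hi else (1 if density > lo else 0)
    return labels
```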
49

Kuo, Po-Hung, e 郭柏宏. "Image Super Resolution Based on Deep Learning". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/35559245822499458738.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
Academic year 104 (ROC calendar, 2015-2016)
We develop two super-resolution methods based on different deep learning architectures: the first uses a convolutional restricted Boltzmann machine (CRBM), the second a convolutional neural network (CNN). To accelerate the training procedure, we implement parallel training algorithms on a GPU. Our experiments reveal that the super-resolution performance of our methods is equivalent to that of sparse coding, while our processing speed is much faster.
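The abstract does not specify the CRBM itself; as a simplified, hedged illustration of the RBM training principle it builds on, here is one contrastive-divergence (CD-1) update for a plain (non-convolutional) Bernoulli RBM:

```python
import numpy as np

def cd1_update(W, b_v, b_h, v0, rng, lr=0.01):
    """One contrastive-divergence (CD-1) step for a Bernoulli RBM.
    v0: (batch, n_visible) binary data; W: (n_visible, n_hidden)."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    ph0 = sigmoid(v0 @ W + b_h)                        # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b_v)                      # reconstruct visibles
    ph1 = sigmoid(pv1 @ W + b_h)                       # re-infer hiddens
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)     # positive - negative phase
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
    return W, b_v, b_h
```

A convolutional RBM replaces the dense weight matrix with shared convolutional filters, but the sampling and update structure is the same.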
50

黃士豪. "Local Learning-Based Image Super-Resolution on License Plates". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/92587677163522747428.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Master's thesis
National Taiwan Normal University
Graduate Institute of Computer Science and Information Engineering
Academic year 100 (ROC calendar, 2011-2012)
As more and more surveillance cameras are deployed, video surveillance systems become ever more important, and their recordings often assist police work through license plate recognition. License plate recognition (LPR) therefore usually plays an important role in video surveillance systems. To save costs, the resolution of these surveillance cameras is usually not very high, and the objective of super-resolution (SR) on license plate images is to enhance the resolution of those images. In this thesis, we propose a learning-based SR approach for license plate images. First, several high-resolution (HR) license plate images and the corresponding generated low-resolution (LR) ones are collected as training images. Next, clustered HR and LR patch pairs are obtained from the training images. Then, license plates are extracted from an LR traffic surveillance image and cut into overlapping patches, and the clustered HR and LR patch pairs are used to generate the HR patch for each LR patch using the locally linear embedding (LLE) algorithm. Finally, the HR license plate image can be reconstructed. Preliminary experiments on realistic image data demonstrate the applicability of the proposed approach.
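The neighbor-embedding step can be sketched compactly: for each LR input patch, find its k nearest LR training patches, solve for the locally-linear-embedding weights that best reconstruct it, and apply the same weights to the paired HR patches. A minimal NumPy version (patch matrices are assumed to be flattened rows; k and the regularizer are illustrative):

```python
import numpy as np

def lle_hr_patch(lr_patch, lr_train, hr_train, k=5, reg=1e-3):
    """Neighbor-embedding SR for one patch: reconstruct the LR input from
    its k nearest LR training patches, then apply the same weights to the
    paired HR patches. Rows of lr_train / hr_train are flattened patches."""
    dists = np.linalg.norm(lr_train - lr_patch, axis=1)
    nn = np.argsort(dists)[:k]                 # indices of the k neighbors
    Z = lr_train[nn] - lr_patch                # shift neighbors to the origin
    G = Z @ Z.T + reg * np.eye(k)              # regularized local Gram matrix
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                               # LLE weights sum to one
    return w @ hr_train[nn]                    # estimated HR patch (flattened)
```

Overlapping output patches would then be averaged in the overlap regions to assemble the full HR plate image.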
