Doctoral dissertations on the topic "Transfert de style"
Consult the top 50 doctoral dissertations for your research on the topic "Transfert de style".
Mohammed, Omar. "Méthodes d'apprentissage approfondi pour l'extraction et le transfert de style". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT035.
One aspect of a successful human-machine interface (e.g. human-robot interaction, chatbots, speech, handwriting, etc.) is the ability to have a personalized interaction. This affects the overall human experience and allows for a more fluent interaction. At the moment, much of the work that uses machine learning to model such interactions does not address the issue of personalized behavior: these models average over the examples from different people in the training set. Identifying human styles (personas) opens the possibility of biasing the models' output to take the human's preferences into account. In this thesis, we focus on the problem of styles in the context of handwriting. Defining and extracting handwriting styles is a challenging problem, since there is no formal definition of those styles (i.e., it is an ill-posed problem). Styles are both social, depending on the writer's training, especially in middle school, and idiosyncratic, depending on the writer's letter shaping (roundness, sharpness, etc.) and force distribution over time. As a consequence, there are no easy or generic metrics to measure the quality of style in a machine's behavior. We may also want to change the task or adapt to a new person. Collecting data in the human-machine interface domain can be quite expensive and time-consuming, and although the new task usually has much in common with the old one, traditional machine learning techniques fail to take advantage of this commonality, leading to a quick degradation in performance. Thus, one objective of this thesis is to study and evaluate the idea of transferring knowledge about styles between different tasks, within the machine learning paradigm. In sum, the objective of this thesis is to study these problems of style in the domain of handwriting.
We use the IRONOFF dataset, an online handwriting dataset with 410 writers and ~25K examples of uppercase letters, lowercase letters, and digits. For transfer learning, we use an additional dataset, QuickDraw!, a sketch-drawing dataset containing ~50 million drawings over 345 categories. The major contributions of this thesis are:
1) A work pipeline to study the problem of styles in handwriting, comprising a methodology, benchmarks, and evaluation metrics. We choose the temporal generative model paradigm in deep learning to generate drawings and evaluate their proximity and relevance to the intended ground-truth drawings. We propose two metrics that evaluate the curvature and the length of the generated drawings. To ground those metrics, we propose multiple benchmarks whose relative power is known in advance, and then verify that the metrics actually respect this relative-power relationship.
2) A framework to study and extract styles, whose advantage is verified against the previously proposed benchmarks. We settle on a deep conditioned autoencoder to summarize and extract the style information, without needing to model the task identity (since it is given as a condition). We validate this framework on the proposed benchmarks using our evaluation metrics, and we also visualize the extracted styles, leading to some exciting outcomes.
3) Using the proposed framework, a way to transfer information about styles between different tasks, together with a protocol to evaluate the quality of the transfer. We extract the encoder part of the deep conditioned autoencoder, which we believe holds the relevant style information, and use it in new models trained on new tasks. We extensively test this paradigm over a range of tasks on both the IRONOFF and QuickDraw! datasets. We show that we can successfully transfer style information between different tasks.
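The abstract above mentions two metrics for the length and curvature of generated drawings. As an illustration only (the thesis's exact definitions are not reproduced here), a minimal sketch of such metrics for a drawing represented as a polyline of (x, y) points might look like:

```python
import math

def stroke_length(points):
    """Total arc length of a polyline given as (x, y) points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def mean_abs_turning_angle(points):
    """Average absolute turning angle (radians) between successive
    segments -- a simple proxy for the curvature of a drawing."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = a2 - a1
        # wrap the angle difference into (-pi, pi]
        d = (d + math.pi) % (2 * math.pi) - math.pi
        angles.append(abs(d))
    return sum(angles) / len(angles) if angles else 0.0
```

A straight line scores zero curvature under this proxy, while a drawing that turns sharply at every point scores close to pi.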
Cifka, Ondrej. "Deep learning methods for music style transfer". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT029.
Recently, deep learning methods have enabled transforming musical material in a data-driven manner. The focus of this thesis is a family of tasks we refer to as (one-shot) music style transfer, where the goal is to transfer the style of one musical piece or fragment onto another. In the first part of this work, we focus on supervised methods for symbolic-music accompaniment style transfer, aiming to transform a given piece by generating a new accompaniment for it in the style of another piece. The method we developed is based on supervised sequence-to-sequence learning with recurrent neural networks (RNNs) and leverages a synthetic parallel (pairwise-aligned) dataset generated for this purpose using existing accompaniment-generation software. We propose a set of objective metrics to evaluate performance on this new task, and we show that the system succeeds in generating an accompaniment in the desired style while following the harmonic structure of the input. In the second part, we investigate a more basic question: the role of positional encodings (PE) in music generation with Transformers. In particular, we propose stochastic positional encoding (SPE), a novel form of PE capturing relative positions while remaining compatible with a recently proposed family of efficient Transformers. We demonstrate that SPE allows better extrapolation beyond the training sequence length than the commonly used absolute PE. Finally, in the third part, we turn from symbolic music to audio and address the problem of timbre transfer. Specifically, we are interested in transferring the timbre of an audio recording of a single musical instrument onto another such recording while preserving the pitch content of the latter. We present a novel method for this task, based on an extension of the vector-quantized variational autoencoder (VQ-VAE), along with a simple self-supervised learning strategy designed to obtain disentangled representations of timbre and pitch. As in the first part, we design a set of objective metrics for the task, and we show that the proposed method outperforms existing ones.
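The third part builds on the vector-quantized variational autoencoder (VQ-VAE). The core operation of any VQ-VAE, mapping each continuous latent vector to the index of its nearest codebook entry, can be sketched as follows (a simplified, illustrative version, not the thesis's implementation):

```python
def vector_quantize(latents, codebook):
    """Map each latent vector to the index of its nearest codebook
    entry (squared Euclidean distance) -- the discretization step at
    the heart of a VQ-VAE."""
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return [min(range(len(codebook)), key=lambda k: sqdist(z, codebook[k]))
            for z in latents]
```

During training, the encoder output is replaced by the selected codebook vectors, which is what yields the discrete, disentangled representations the abstract refers to.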
Fares, Mireille. "Multimodal Expressive Gesturing With Style". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS017.
The generation of expressive gestures allows Embodied Conversational Agents (ECAs) to articulate speech intent and content in a human-like fashion. The central theme of the manuscript is to leverage and control the ECAs' behavioral expressivity by modelling the complex multimodal behavior that humans employ during communication. The driving forces of the thesis are twofold: (1) to exploit speech prosody, visual prosody, and language with the aim of synthesizing expressive and human-like behaviors for ECAs; (2) to control the style of the synthesized gestures so that we can generate them in the style of any speaker. With these motivations in mind, we first propose a semantically aware and speech-driven facial and head gesture synthesis model trained on the TEDx Corpus, which we collected. We then propose ZS-MSTM 1.0, an approach to synthesize stylized upper-body gestures, driven by the content of a source speaker's speech and corresponding to the style of any target speaker, seen or unseen by our model. It is trained on the PATS Corpus, which includes multimodal data of speakers with different behavioral styles. ZS-MSTM 1.0 is not limited to PATS speakers and can generate gestures in the style of any new speaker without further training or fine-tuning, rendering our approach zero-shot. Behavioral style is modelled from multimodal speaker data (language, body gestures, and speech) and is independent of the speaker's identity ("ID"). We additionally propose ZS-MSTM 2.0 to generate stylized facial gestures in addition to upper-body gestures. We train ZS-MSTM 2.0 on the PATS Corpus, which we extended to include dialog acts and 2D facial landmarks.
Shen, Tianxiao. "Language style transfer". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117822.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 41-45).
This thesis studies style transfer on the basis of non-parallel text. This is an instance of a broad family of problems including machine translation, decipherment, and attribute modification. The key challenge is to separate content from style in an unsupervised manner. We assume a shared latent content distribution across different text corpora and propose a method that leverages refined alignment of latent representations to perform style transfer. The transferred sentences from one style should match example sentences from the other style as a population. To demonstrate the flexibility of the proposed model, we test it on three tasks: sentiment modification, decipherment of word substitution ciphers, and word order recovery. In both automatic and human evaluation, our method achieves strong performance.
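One of the three benchmark tasks above is decipherment of word substitution ciphers. A minimal sketch of how such a cipher corpus can be generated (the function name and setup are illustrative, not taken from the thesis):

```python
import random

def substitution_cipher(sentences, seed=0):
    """Apply a random word-substitution cipher: every vocabulary word
    is consistently replaced by another word, preserving word order.
    Recovering the original text from non-parallel samples of the two
    'languages' is the decipherment task."""
    vocab = sorted({w for s in sentences for w in s.split()})
    shuffled = vocab[:]
    random.Random(seed).shuffle(shuffled)
    mapping = dict(zip(vocab, shuffled))
    return [" ".join(mapping[w] for w in s.split()) for s in sentences]
```

Because the mapping is a fixed permutation of the vocabulary, the ciphered corpus shares its content distribution with the original, which is exactly the shared-latent-content assumption the method exploits.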
by Tianxiao Shen.
S.M.
Hart, David Marvin. "Light-Field Style Transfer". BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7763.
Graffieti, Gabriele. "Style Transfer with Generative Adversarial Networks". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17015/.
Matthews, Nicholas (Nicholas J. ). "Evaluating style transfer in natural language". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119734.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 46-47).
Style transfer is an active area of research growing in popularity in the natural language setting. The goal of this thesis is to present a comprehensive review of the style transfer tasks used to date, analyze these tasks, and delineate important properties and candidate tasks for future methods researchers. Several challenges still exist, including the difficulty of distinguishing between content and style in a sentence. While some state-of-the-art models attempt to overcome this problem, even tasks as simple as sentiment transfer are still non-trivial. Problems of granularity, transferability, and distinguishability have yet to be solved. I provide a comprehensive analysis of the popular sentiment transfer task along with a number of metrics that highlight its shortcomings. Finally, I introduce possible new tasks for consideration, news-outlet style transfer and non-parallel error correction, and provide similar analysis of the feasibility of using these tasks as style transfer baselines.
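A recurring metric in sentiment-transfer evaluation is content preservation. As an illustrative sketch only (not one of the metrics proposed in the thesis), a crude word-overlap measure that discounts style-marker words might look like:

```python
def content_preservation(source, transferred, style_words):
    """Jaccard overlap of content words (style markers removed) --
    a crude proxy for how much content survives a style rewrite.
    `style_words` is an assumed, hand-picked set of sentiment markers."""
    a = set(source.lower().split()) - style_words
    b = set(transferred.lower().split()) - style_words
    return len(a & b) / len(a | b) if a | b else 1.0
```

Such overlap measures illustrate the shortcoming the thesis analyzes: they reward copying the input and cannot tell a genuine style rewrite from a trivial one.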
by Nicholas Matthews.
M. Eng.
Battilana, Pietro. "Convolutional Neural Networks for Image Style Transfer". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16770/.
Shih, YiChang. "Data-driven photographic style using local transfer". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99846.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 139-154).
After taking pictures, photographers often seek to convey their unique moods by altering the style of their photographs, which can involve meticulous contrast management, lighting, dodging, and burning. In this sense, not only are advanced photographers concerned about their pictures' styles; casual photographers who take pictures with cellphone cameras also process their pictures using built-in applications to adjust the image's luminance, coloring, and details. In general, photographers who stylize pictures give them new, different visual appearances while preserving the original content. In this context, we investigate novel image stylization problems, including reproducing the precise time of day at which the lighting and atmosphere can make a landscape glow, and making a portrait's style resemble that created by a renowned photographer. Given an already captured image, however, automatically achieving a given style is challenging. In fact, changing the appearance of a photograph to mimic another time of day requires analyzing and modeling complex 3-D physical light interactions in the scene, while reproducing a portrait photographer's unique style requires computers to acquire artistic taste and a glimpse of the artist's creative process. In this dissertation, we sidestep these AI-complete problems and instead leverage the power of data. We exploit an image database consisting of time-lapse data describing variations in scene appearance over the course of an entire day, and stylish portraits already deliberately processed by artists. To leverage these data, we present new algorithms that put input images in dense, local correspondence with examples. In our first method, we change the time of day with a single image as input, which we put in correspondence with a reference time-lapse video. We then extract the local appearance transformations between different frames of the reference and apply them to the input.
In our second method, we transfer the style of a portrait onto a new input by way of local and multi-scale transformations. We demonstrate our methods on public datasets and a large set of photos downloaded from the Internet. We show that we can successfully handle lighting at different times of day and styles by a variety of different artists.
by YiChang Shih.
Ph. D.
Tao, Joakim, and David Thimrén. "Smoothening of Software documentation : comparing a self-made sequence to sequence model to a pre-trained model GPT-2". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178186.
This thesis was presented on June 22, 2021; the presentation took place online on Microsoft Teams.
Fu, Xiangyu. "An Implementation of the Neural Style Transfer Algorithm". Scholarship @ Claremont, 2018. http://scholarship.claremont.edu/cmc_theses/1986.
Bae, Soonmin. "Statistical analysis and transfer of coarse-grain pictorial style". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34639.
Includes bibliographical references (p. 96-103).
We show that image statistics can be used to analyze and transfer simple notions of pictorial style of paintings and photographs. We characterize the frequency content of pictorial styles, such as multi-scale, spatial variations, and anisotropy properties, using a multi-scale and oriented decomposition, the steerable pyramid. We show that the average of the absolute steerable coefficients as a function of scale characterizes simple notions of "look" or style. We extend this approach to account for image non-stationarity, that is, we capture and transfer the spatial variations of multi-scale content. In addition, we measure the standard deviation of the steerable coefficients across orientation, which characterizes image anisotropy and permits analysis and transfer of oriented structures. We focus on the statistical features that can be transferred. Since we couple analysis and transfer, our statistical model and transfer tools are consistent with the visual effect of pictorial styles. For this reason, our technique leads to more intuitive manipulation and interpolation of pictorial styles. In addition, our statistical model can be used to classify and retrieve images by style.
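The statistic described above, the average absolute coefficient as a function of scale, can be illustrated with a much simpler 1-D Haar-style decomposition standing in for the steerable pyramid (an illustrative simplification, not the method used in the thesis):

```python
def mean_abs_per_scale(signal, levels=3):
    """Mean absolute detail coefficient at each scale of a Haar-style
    decomposition -- a 1-D stand-in for the steerable-pyramid statistic
    used to characterize the 'look' of an image."""
    stats = []
    s = list(signal)
    for _ in range(levels):
        if len(s) < 2:
            break
        # detail = half-differences, approximation = half-sums of pairs
        details = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s) - 1, 2)]
        s = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s) - 1, 2)]
        stats.append(sum(abs(d) for d in details) / len(details))
    return stats
```

A style with strong fine texture yields a large statistic at the first scale, while a smooth, low-frequency style concentrates its energy at coarser scales; transferring a style then amounts to matching these per-scale statistics.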
by Soonmin Bae.
S.M.
Lindelöf, Anna. "Expanding Paintings Data for Object Classification via Style Transfer". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-255008.
Deep convolutional networks and large, publicly available datasets have driven major advances in computer vision. For object classification, the available datasets usually consist of photographs. When training networks to classify objects in paintings, data from the same domain is therefore usually limited, and photographs are commonly used instead. However, this often results in lower classification performance because of the domain shift between training data and target data. In this thesis, we investigate the possibility of using the neural style transfer algorithm to create synthetic paintings that can be used to augment the training data. For our experiments we compiled a dataset of over 14,500 paintings. Feature representations were extracted from the images using a pretrained convolutional network. A simple classifier was then trained on a training set of paintings to which different proportions of synthetic paintings were added. The same experiments were repeated with photographs added instead. We found that classification performance can be improved by augmenting the training data with either synthetic paintings or photographs. Our results do not support a general preference for synthetic paintings over photographs. However, we show that under certain conditions, such as for certain object classes, synthetic paintings give the largest improvement in classification. Finally, we identify two main problem areas when the neural style transfer algorithm is used to create synthetic paintings for data augmentation. The first problem is the transfer of noise to the synthetic paintings.
The second problem is that the algorithm only transfers the painting's style and therefore ignores other differences between photographs and paintings in how objects are depicted, such as object size, image composition, and how an object's appearance changes over time. We consider this a relevant problem, since we found that the photographs and paintings differed in these respects, as well as in their extracted feature representations.
Groneman, Kathryn Jane. "The Trouble with Transfer". BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2164.
Chen, Tian Qi. "Deep kernel mean embeddings for generative modeling and feedforward style transfer". Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62668.
Science, Faculty of
Computer Science, Department of
Graduate
Stoltzfus, Kevin Matthew. "The Relationship between Teachers' Training Transfer and their Perceptions of Principal Leadership Style". Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/194856.
Spicer, David Philip. "Mental models, cognitive style, and organisational learning : the development of shared understanding in organisations". Thesis, University of Plymouth, 2000. http://hdl.handle.net/10026.1/363.
Pełny tekst źródłaLespinats, Sylvain. "Style du génome exploré par analyse textuelle de l'ADN". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2006. http://tel.archives-ouvertes.fr/tel-00151611.
Building on these observations, we developed procedures for evaluating distances between signatures, so as to make more explicit the biological information on which our analyses rely. A non-linear neighborhood projection method is associated with them, which circumvents high-dimensionality problems and allows the space occupied by the data to be visualized. Analyzing the relationships between signatures raises the question of each variable's (each word's) contribution to the distance between signatures. An original Z-score based on the variation of word frequencies along genomes allowed us to quantify these contributions. Studying the variation of the full set of frequencies along a genome makes it possible to extract atypical segments, and a signal-analysis-based method can precisely delimit these atypical regions.
Thanks to this set of methods, we propose biological results. In particular, we show an organization of the genomic-signature space that is consistent with species taxonomy. Moreover, we observe the presence of a DNA syntax: there are "syntactic" words and "semantic" words, with the signature relying mostly on the syntactic ones. Finally, analyzing signatures along the genome enables precise detection and segmentation of RNAs and of probable horizontal transfers. A convergence of the style of horizontal transfers toward the host's signature was also observed.
Varied results have thus been obtained through signature analysis. The ease of use and speed of signature-based sequence analysis make it a powerful tool for extracting biological information from genomes.
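The genomic signature analyzed in this thesis is, at its simplest, a vector of word (k-mer) frequencies computed from a DNA sequence. An illustrative sketch of computing such signatures and a distance between them (a plain Euclidean distance here; the thesis develops more refined distance measures):

```python
from itertools import product

def genomic_signature(seq, k=2):
    """Relative frequency of every k-mer (word) over the alphabet ACGT --
    a simple form of the 'genomic signature' compared across species."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {w: 0 for w in kmers}
    for i in range(len(seq) - k + 1):
        w = seq[i:i + k]
        if w in counts:
            counts[w] += 1
    total = sum(counts.values()) or 1
    return {w: c / total for w, c in counts.items()}

def signature_distance(sig_a, sig_b):
    """Euclidean distance between two signatures over the same k-mers."""
    return sum((sig_a[w] - sig_b[w]) ** 2 for w in sig_a) ** 0.5
```

Computing such signatures in sliding windows along a genome is what allows atypical segments, such as horizontally transferred regions, to stand out from the host's background signature.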
Flowers, Candice April. "Backward Transfer of Apology Strategies from Japanese to English: Do English L1 Speakers Use Japanese-Style Apologies When Speaking English?" BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/6953.
Hamid, Zurina binti Abdul. "Managerial tacit knowledge transfer and the mediating role of leader-member-exchange and cognitive style". Thesis, University of Hull, 2012. http://hydra.hull.ac.uk/resources/hull:5831.
Valida, Abelardo Cutamora. "Becoming World-Class Universities Singapore Style: Are Organized Research Units the Answer?" Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/195022.
Webb, Arnold. "Fourier transform based investment styles on the Johannesburg Stock Exchange". Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/39956.
Dissertation (MBA)--University of Pretoria, 2013.
Gordon Institute of Business Science (GIBS)
MBA
Unrestricted
Tao, Wang. "Adapting multiple datasets for better mammography tumor detection". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231867.
In Sweden, women aged 40 to 74 undergo regular breast screening every 18-24 months. The screening mainly involves taking mammograms and having radiologists analyze them to detect signs of breast cancer. However, interpreting a mammogram requires an experienced radiologist, and the shortage of radiologists reduces hospitals' operational efficiency. Moreover, the fact that mammograms come from different facilities increases the difficulty of diagnosis. Our work proposes a deep learning segmentation system that can adapt to mammograms from different facilities and localize the position of a tumor. We train and test our method on two public mammography datasets and run several experiments to find the best parameter settings for our system. The test segmentation results suggest that our system can serve as an auxiliary diagnostic tool in breast cancer diagnosis, improving diagnostic accuracy and efficiency.
Mcmahon, Sean. "Knowledge Management: Style, Structure, and the Latent Potential of Documented Knowledge". Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5668.
Ph.D.
Doctorate
Dean's Office, Business Administration
Business Administration
Business Administration; Management
Kola, Ramya Sree. "Generation of synthetic plant images using deep learning architecture". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18450.
Pełny tekst źródłaLindblad, Maria. "A Comparative Study of the Quality between Formality Style Transfer of Sentences in Swedish and English, leveraging the BERT model". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299932.
Formality style transfer refers to the task of automatically transforming a piece of text from one level of formality to another. Previous research has investigated methods for performing the task on English text, but at the time of this project there were, to the author's knowledge, no prior studies analyzing the quality of formality style transfer on Swedish text. The purpose of this work was to investigate how a model trained for formality style transfer on Swedish text performs. This was done by comparing the quality of a model trained for formality style transfer on Swedish text with a corresponding model trained on English text. Both models were implemented as encoder-decoder models whose weights were initialized using two existing Bidirectional Encoder Representations from Transformers (BERT) models, pretrained on Swedish and English text respectively. The two models were fine-tuned for transformation both from informal to formal style and from formal to informal. During fine-tuning, a Swedish and an English version of the corpus Grammarly's Yahoo Answers Formality Corpus (GYAFC) were used. The Swedish version of GYAFC was created through automatic machine translation of the original English version. The Swedish corpus was then evaluated using the three criteria of meaning preservation, formality preservation, and fluency preservation. The results of the study indicated that the Swedish model had the capacity to match the quality of the English model but was held back by the lower quality of the Swedish corpus. The study also underlined the need for task-specific corpora in Swedish.
Hutchinson, Ucrecia Faith. "Biochemical processes for Balsamic-styled vinegar engineering". Thesis, Cape Peninsula University of Technology, 2019. http://hdl.handle.net/20.500.11838/3048.
The South African wine industry constantly faces several challenges that affect the quality of wine, local and global demand, and consequently the revenue generated. These challenges include the ongoing drought, bush fires, climate change, and several liquor amendment bills aimed at reducing alcohol consumption and the number of alcohol outlets in South Africa. It is therefore critical for the wine industry to expand and find alternative ways in which sub-standard or surplus wine grapes can be used, to prevent income losses and increase employment opportunities. Traditional Balsamic Vinegar (TBV) is a geographically and legislatively protected product produced only in a small region of Italy; however, the methodology can be used to produce similar vinegars in other regions. Balsamic-styled vinegar (BSV), as defined in this thesis, is a vinegar produced by partially following the methods of TBV while applying process augmentation techniques. Balsamic-styled vinegar is proposed as a suitable product for sub-standard-quality or surplus wine grapes in South Africa. However, the production of BSV necessitates the use of cooked (high-sugar) grape must, which is a less favourable environment for the microorganisms used during fermentation. Factors that negatively affect the survival of the microorganisms include low water activity due to the cooking, high osmotic pressure, and high acidity. To counteract these effects, methods to improve the survival of the non-Saccharomyces yeasts and acetic acid bacteria used are essential. The primary aim of this study was to investigate several BSV process augmentation techniques, such as aeration, agitation, cell immobilization, immobilized-cell reusability, and oxygen mass transfer kinetics, in order to improve the performance of the microbial consortium used during BSV production. The work for this study was divided into four (4) phases.
For all phases, a microbial consortium consisting of non-Saccharomyces yeasts (n=5) and acetic acid bacteria (n=5) was used; the yeast and bacteria were inoculated simultaneously. The 1st phase of the study entailed evaluating the effect of cells immobilized by gel entrapment in Ca-alginate beads, alongside free-floating cells (FFC), during the production of BSV. Two Ca-alginate bead sizes were tested, i.e. small (4.5 mm) and large (8.5 mm) beads, to evaluate the effects of surface area or bead size on the overall acetification rates. Ca-alginate bead and FFC fermentations were also evaluated under static and agitated (135 rpm) conditions. The 2nd phase of the study involved studying the cell adsorption technique for cell immobilization, which was carried out using corncobs (CC) and oak wood chips (OWC), compared with FFC fermentations. In this phase, other vinegar bioreactor parameters such as agitation and aeration were studied in contrast to static fermentations. One agitation setting (135 rpm) and two aeration settings were tested, i.e. high (0.3 vvm) and low (0.15 vvm) aeration conditions. Furthermore, to assess the variations in cell adsorption capability among individual yeast and AAB cells, the cells adsorbed on CC and OWC were quantified before and after fermentation using the dry cell weight method. The 3rd phase of the study entailed evaluating the reusability of all the matrices (small Ca-alginate beads, CC, and OWC) for successive fermentations. The immobilized cells were evaluated for reusability over two cycles of fermentation under static conditions. Furthermore, the matrices used for cell immobilization were analysed for structural integrity by scanning electron microscopy (SEM) before and after the 1st cycle of fermentations.
The 3rd phase of the study also involved sensorial (aroma and taste) evaluations of the BSVs obtained from the 1st cycle of fermentation, in order to understand the sensorial effects of the Ca-alginate beads, CC, and OWC on the final BSV. The 4th phase of the study investigated oxygen mass transfer kinetics during non-aerated and aerated BSV fermentation. The dynamic method was used to generate several dissolved oxygen profiles at different stages of the fermentation, and the resulting data were used to compute several oxygen mass transfer parameters: the oxygen uptake rate (rO2), the stoichiometric coefficient of oxygen consumption versus acid yield (YO/A), the oxygen transfer rate (NO2), and the volumetric mass transfer coefficient (KLa). During all phases of the study, samples were extracted at weekly intervals to evaluate pH, sugar, salinity, alcohol, and total acidity using several analytical instruments. The 4th phase involved additional analytical tools, i.e. an oxygen microsensor to evaluate dissolved oxygen and the 'Speedy Breedy' respirometer to measure the respiratory activity of the microbial consortium during fermentation. The data obtained from the 1st phase of the study demonstrated that smaller Ca-alginate beads resulted in higher acetification rates (4.0 g L-1 day-1) than larger beads (3.0 g L-1 day-1), while freely suspended cells resulted in the lowest acetification rates (0.6 g L-1 day-1). The results showed that the surface area of the beads had a substantial impact on the acetification rates when gel-entrapped cells were used for BSV fermentation. The 2nd phase results showed high acetification rates (2.7 g L-1 day-1) for cells immobilized on CC, in contrast to cells immobilized on OWC and FFC, which resulted in similar, lower acetification rates. Agitated fermentations were unsuccessful for all the treatments (CC, OWC, and FFC) studied.
Agitation was therefore assumed to have promoted cell shear stress, causing insufficient acetification during fermentation. Low-aeration fermentations resulted in better acetification rates, between 1.45 and 1.56 g L−1 day−1, for CC, OWC and FFC. At the higher aeration setting, only free-floating cells were able to complete fermentation, with an acetification rate of 1.2 g L−1 day−1. Furthermore, the adsorption data showed successful adsorption on CC and OWC for both yeasts and AAB, with variations in adsorption efficiency: OWC were less efficient adsorbents due to their smooth surface, while the rough surface and porosity of CC led to improved adsorption and, therefore, enhanced acetification rates. The 3rd phase results showed a substantial decline in acetification rates in the 2nd cycle of fermentations when cells immobilized on CC and OWC were reused, whereas cells entrapped in Ca-alginate beads were able to complete the 2nd cycle of fermentations at reduced acetification rates compared to the 1st cycle. The sensory results showed positive ratings for BSVs produced using cells immobilized in Ca-alginate beads and CC; however, BSVs produced using OWC treatments were neither 'liked nor disliked' by the judges. The SEM imaging results further showed a substantial loss of structural integrity for Ca-alginate beads after the 1st cycle of fermentations, with only minor changes observed in the structural integrity of CC. OWC displayed the same morphological structure before and after the 1st cycle of fermentations, which was attributed to their robustness. Although Ca-alginate beads showed a loss of structural integrity, it was still assumed, given the acetification rates obtained in both cycles, that they provided better protection against the harsh environmental conditions than the CC and OWC adsorbents.
The 4th phase computations showed that non-aerated fermentations had higher YO/A, rO2 and NO2 values and a higher kLa. It was clear that aerated fermentations had a lower aeration capacity due to an inappropriate aeration system design and an inappropriate fermentor. Consequently, aeration led to several detrimental biochemical changes in the fermentation medium, affecting kLa and the several oxygen mass transfer parameters that act as the driving force for acetification. Overall, it was concluded that the best method for BSV production is the use of cells entrapped in small alginate beads or cells adsorbed on CC under static, non-aerated fermentation. This conclusion was based on several factors, such as cell affinity/cell protection, acetification rates, fermentation period and sensory contributions; of the two, cells entrapped in Ca-alginate beads had the highest acetification rates. The oxygen mass transfer computations demonstrated a higher kLa when Ca-alginate beads were used under static, non-aerated conditions than in fermentations treated with CC. Therefore, a fermentor with a high aeration capacity needs to be designed to best suit the two BSV production systems (Ca-alginate beads and CC). It is also crucial to develop methods that increase the robustness of Ca-alginate beads, in order to improve cell retention and reduce the loss of structural integrity over subsequent cycles of fermentation. Studies to define parameters for upscaling the BSV production process to large-scale production are also crucial.
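The dynamic method used above to derive the oxygen mass transfer parameters can be sketched as follows. This is a minimal illustration assuming the usual first-order transfer model dC/dt = kLa·(C* − C) − rO2, with hypothetical data arrays; it is not the thesis's actual computation:

```python
import numpy as np

def estimate_our(t, c):
    """Oxygen uptake rate rO2 from the de-oxygenation phase of the
    dynamic method (aeration off): dC/dt = -rO2, so the DO profile is
    a straight line whose negative slope is rO2."""
    slope, _intercept = np.polyfit(t, c, 1)
    return -slope

def estimate_kla(t, c, c_star, r_o2):
    """kLa from the re-oxygenation phase (aeration back on):
    dC/dt = kLa * (C* - C) - rO2.  Rearranged, (dC/dt + rO2) plotted
    against (C* - C) is a line through the origin with slope kLa,
    estimated here by least squares."""
    dcdt = np.gradient(c, t)       # numerical derivative of the DO profile
    x = c_star - c                 # concentration driving force
    return float(np.dot(x, dcdt + r_o2) / np.dot(x, x))
```

With rO2 and kLa in hand, the oxygen transfer rate follows as NO2 = kLa·(C* − C).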
Nikitina, J. "LEGAL STYLE MARKERS AND THEIR TRANSLATION IN WRITTEN PLEADINGS BEFORE THE EUROPEAN COURT OF HUMAN RIGHTS". Doctoral thesis, Università degli Studi di Milano, 2017. http://hdl.handle.net/2434/525762.
Full text source
MacPherson, Randall T. "The relationship among content knowledge, technical experience, cognitive styles, critical thinking skills, problem solving styles, and near transfer trouble shooting technological problem solving skills of maintenance technicians /". free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9841170.
Full text source
Svenson, Frithiof, Chaudhuri Himadri Roy, Arindam Das and Markus Launer. "Decision-making style and trusting stance at the workplace: A socio-cultural approach". TUDpress, 2020. https://tud.qucosa.de/id/qucosa%3A73713.
Full text source
Dennison, Taryn. "Attachment style and the transfer of attachment functions from parents to peers in relation to the subjective wellbeing of first-year undergraduate students". Thesis, University of Surrey, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599577.
Full text source
Maddipudi, Koushik. "Efficient Architectures for Retrieving Mixed Data with Rest Architecture Style and HTML5 Support". TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1251.
Full text source
Lakew, Surafel Melaku. "Multilingual Neural Machine Translation for Low Resource Languages". Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/257906.
Full text source
Kaprálková, Michaela. "Football coaches’ awareness and implementation of team dynamics". Thesis, Malmö universitet, Fakulteten för lärande och samhälle (LS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-31399.
Full text source
De Biase, Alessia. "Generative Adversarial Networks to enhance decision support in digital pathology". Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158486.
Full text source
Zabaleta Razquin, Itziar. "Image processing algorithms as artistic tools in digital cinema". Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/672840.
Pełny tekst źródłaLa industria del cine ha experimentado un cambio radical en las últimas décadas: la transición de su soporte fílmico a la tecnología del cine digital. Como consecuencia, han aparecido algunos desafíos técnicos, pero, al mismo tiempo, infinitas nuevas posibilidades se han abierto con la utilización de este nuevo medio. En esta tesis, se proponen diferentes herramientas que pueden ser útiles en el contexto del cine. Primero, se ha desarrollado una herramienta para aplicar \textit{color grading} de manera automática. Es un método basado en estadísticas de imágenes, que transfiere el estilo de una imagen de referencia a metraje sin procesar. Las ventajas del método son su sencillez y bajo coste computacional, que lo hacen adecuado para ser implementado a tiempo real, permitiendo que se pueda experimentar con diferentes estilos y 'looks', directamente on-set. En segundo lugar, se ha creado un método para mejorar imágenes mediante la adición de textura. En cine, el grano de película es la textura más utilizada, ya sea porque la grabación se hace directamente sobre película, o porque ha sido añadido a posteriori en contenido grabado en formato digital. En esta tesis se propone un método de 'ruido retiniano' inspirado en procesos del sistema visual, que produce resultados naturales y visualmente agradables. El modelo cuenta con parámetros que permiten variar ampliamente la apariencia de la textura, y por tanto puede ser utilizado como una herramienta artística para cinematografía. Además, debido al fenómeno de enmascaramiento del sistema visual, al añadir esta textura se produce una mejora en la calidad percibida de las imágenes, lo que supone ahorros en ancho de banda y tasa de bits. El método ha sido validado mediante experimentos psicofísicos en los cuales ha sido elegido por encima de otros métodos que emulan grano de película, métodos procedentes de academia como de industria. 
Finally, an image quality metric based on physiological phenomena is described, with applications in image processing in general and, more specifically, in the context of cinema and image transmission: video coding, image compression, etc. An optimization of the model's parameters is proposed so that it is competitive with other state-of-the-art methods. One advantage of this method is its small number of parameters compared with some deep-learning-based methods, which have several orders of magnitude more.
Chen, Yi-Ting, and 陳奕廷. "Unsupervised Text Style Transfer and its Application to Wuxia-Modern Style Transfer". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/687nz8.
Full text source
National Chung Hsing University
Department of Computer Science and Engineering
106
In recent years, artificial intelligence has flourished, and deep learning has been widely used across research fields, with significant breakthroughs in applications such as translation, face recognition, voice assistants, and spam detection. Style transfer is an important research topic in deep learning and has achieved remarkable results in computer vision; however, it remains relatively immature in natural language processing. One of the main reasons is that existing text style transfer methods require large parallel corpora. In this paper, to address the lack of parallel corpora, we propose a two-stage unsupervised training framework for text style transfer. The core idea is to (1) separately train two models, on the source-style text and the target-style text, to capture the semantics behind the texts, and (2) combine the trained models to perform style transfer. In this way, only non-parallel corpora are needed. In this thesis, we use the modern and wuxia styles as examples to validate the proposed framework and report research findings.
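The two-stage recipe above — capture semantics with one model per style, then combine them — can be illustrated by a deliberately tiny, non-neural sketch, where a shared content vocabulary plays the role of the latent semantics and a retrieval step plays the role of the target-style decoder (all names and corpora here are hypothetical stand-ins, not the thesis's actual models):

```python
from collections import Counter

def encode(sentence, vocab):
    """Toy 'encoder': a bag of content words over a shared vocabulary,
    standing in for a style-agnostic latent representation."""
    return Counter(w for w in sentence.split() if w in vocab)

def decode(latent, target_corpus, vocab):
    """Toy 'decoder': return the target-style sentence whose own latent
    code overlaps most with the input's latent code."""
    def overlap(s):
        return sum((encode(s, vocab) & latent).values())
    return max(target_corpus, key=overlap)

def style_transfer(sentence, target_corpus, vocab):
    """Stage 1 output (encode) fed into the target-style stage 2 (decode)."""
    return decode(encode(sentence, vocab), target_corpus, vocab)
```

In the actual framework these two components would be learned networks trained on the two non-parallel corpora; the sketch only shows how encoding into shared content and re-expressing in the target style compose.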
Wu, Deng-Jyun, and 吳登鈞. "Neural style transfer algorithm analysis and portrait style application". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/q46v7n.
Full text source
National Taiwan University
Graduate Institute of Communication Engineering
107
This thesis mainly discusses comparisons between style transfer algorithms. By extracting features from an image with a convolutional neural network, the image's style and structure information can be captured and reconstructed. VGG-Net extracts rich image features for computer vision tasks, and style transfer is performed by operating on these features; many different stylistic effects can be achieved by parameter settings, but distortion of portrait details is the main problem we found, and it greatly limits the method's application. By processing different regions separately, we address the tendency of style transfer to distort portraits. The first part of the thesis focuses on the algorithm Gatys proposed in 2015, which performs style transfer through a convolutional network. Reconstructing from the features of different convolutional layers of VGG-Net, together with the Gram-matrix design, achieves good results and reveals how the image features observed at different network levels differ; using features from different levels yields different transfer effects. This showed that convolutional networks capture rich features and opened new possibilities for style transfer. However, its computational cost limits its application, so many algorithms have been proposed to improve execution speed. The second part of the thesis mainly uses an encoder-decoder design with whitening and coloring transforms for feature mapping, again reconstructing from the features of different convolutional layers of VGG-Net to obtain different transfer effects. This improves the efficiency of style transfer while enabling arbitrary style transfer.
In the third part, we find that portrait style transfer is prone to distortion, so we segment the image so that the neural network can capture the key image features more accurately, making style transfer easier to use and operate on everyday devices. Our proposed algorithm therefore builds on the current universal style transfer via feature transforms, applies image pre-processing and segmentation, and separately reconstructs each segmented region after style transfer, so that the style transfer algorithm can also be used on portrait photos.
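The Gram-matrix style representation from Gatys et al. that the first part builds on can be sketched directly (assuming feature maps stored as C × H × W arrays):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a C x H x W feature map: channel-wise inner
    products, normalised by the number of spatial positions, so the
    statistic discards spatial layout and keeps style correlations."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(gram_gen, gram_style):
    """Squared Frobenius distance between Gram matrices, the style
    term of the Gatys loss up to a constant scaling factor."""
    return float(np.sum((gram_gen - gram_style) ** 2))
```

In the full algorithm this loss is summed over several VGG-Net layers and minimised with respect to the generated image's pixels.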
Hsu, Jhih-Hong, and 徐誌鴻. "Style Transfer of 3D Scenes". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/u9cfn9.
Full text source
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
107
Style transfer is a popular topic in computer science. Many works try to turn an image into a desired style, and there are several popular commercial products. With the growth of head-mounted displays, the demand for 3D multimedia content, such as games and virtual reality applications, is increasing. Because creating a scene in a particular style is time-consuming, we bring style transfer into the design process. However, most existing works focus on 2D images; we modify these methods to reach our goal. In this thesis, we propose a method to transfer the style of a 3D scene from a specific painting. In the preprocessing stage, we transfer the style of the albedo maps and generate normal maps and displacement maps according to the transferred albedo maps. At runtime, we tune the lighting effects and add silhouettes to improve the transferred result. Finally, we demonstrate the results of transferring objects using different areas of the style image and discuss runtime performance.
Agria, João Manuel Pedro. "deepSTAIl: Style Transfer for Artificial Illustrations". Master's thesis, 2020. http://hdl.handle.net/10316/92590.
Full text source
Neural style transfer is the most recent facet of image-based artistic rendering (IB-AR). Historically, stylization algorithms for non-photorealistic rendering were designed specifically around certain primitives. For example, stroke-based rendering placed virtual strokes on an image, but was carefully designed for only one particular style of stroke and was not capable of simulating an arbitrary style. The inherent limitation on flexibility, style and diversity that some IB-AR algorithms had was balanced by their capability of faithfully depicting those prescribed styles. The demand for novel algorithms to address these limitations gave birth to the field of neural style transfer (NST). The introduction of convolutional neural networks caused a paradigm shift in this long-standing area of research and attracted the attention of both academic and industrial circles. This dissertation has the goal of enabling classical neural style transfer to overcome its computational limitations by training a generative network to perform the same task hundreds of times faster. The logical continuation of faster neural style transfer, video style transfer, is a topic that will be explored due to its many possible applications in augmented reality and virtual reality scenarios and in the animation industry. To solve the task of video processing with neural networks, two alternatives are considered: using computer vision methods to guide a network's training, or changing the network's architecture to take into account spatial and temporal information at the same time.
Shi, Shia-Sheng, and 許珈勝. "3D Human head model style transfer". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/3wc7ew.
Full text source
National Chiao Tung University
Institute of Multimedia Engineering
105
In this thesis, we propose a multi-level shape style transfer method. We first apply mesh parameterization to project input patches onto free-boundary U-V domain meshes, and then perform remeshing to re-sample the U-V domain meshes into image space; each vertex in this mesh stores its original 3D position. We split shape style into two levels: the first is called the "base shape", and the other the "detail components". Both levels of style are computed through the Fourier transform in image space. We also prevent aliasing during remeshing to image space by computing the detail components as the difference between the original patch and the reconstructed base shape in 3D space. The final transfer process allows the user to add, subtract, or blend selected styles and transfer them to a selected source model.
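The base-shape/detail decomposition described above can be sketched as a low-pass/residual split of a height image in the Fourier domain (the radial cutoff parameter here is a hypothetical stand-in for the thesis's actual frequency split):

```python
import numpy as np

def split_shape(heightmap, cutoff):
    """Split a 2-D heightmap into a low-frequency 'base shape' and a
    high-frequency 'detail' residual via the FFT.  Keeping the detail
    as (original - base) guarantees exact reconstruction."""
    f = np.fft.fftshift(np.fft.fft2(heightmap))
    h, w = heightmap.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)   # radial frequency
    base = np.real(np.fft.ifft2(np.fft.ifftshift(f * (dist <= cutoff))))
    return base, heightmap - base

def transfer_detail(target_base, source_detail):
    """Naive style transfer: graft the source's detail components onto
    the target's base shape."""
    return target_base + source_detail
```

The thesis operates on re-sampled U-V images of mesh patches rather than plain heightmaps, but the low/high frequency split follows the same principle.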
Wu, Kuo-Chen, and 吳國禎. "Steganography Using Texture Synthesis and Style Transfer". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/36798755332409207947.
Full text source
National Chung Hsing University
Department of Computer Science and Engineering
103
In this study we present two steganographic algorithms using texture synthesis and style transfer. The first is a novel approach to steganography using reversible texture synthesis. A texture synthesis process re-samples a smaller texture image to synthesize a new texture image with a similar local appearance and arbitrary size. We weave the texture synthesis process into steganography to conceal secret messages. In contrast to using an existing cover image to hide messages, our algorithm conceals the source texture image and embeds secret messages through the process of texture synthesis. This allows us to extract both the secret messages and the source texture from a stego synthetic texture. Our approach offers three distinct advantages. First, our scheme offers an embedding capacity proportional to the size of the stego texture image. Second, a steganalytic algorithm is not likely to defeat our steganographic approach. Third, the reversible capability inherent in our scheme allows recovery of the source texture. Experimental results have verified that our proposed algorithm can provide various embedding capacities, produce visually plausible texture images, and recover the source texture. The second algorithm is a steganographic algorithm that uses constrained texture synthesis and style transfer. Given an original source texture and a target image or vector field, our scheme can produce a new stego synthetic image concealing secret messages. Our approach is the first steganographic method that uses constrained texture synthesis and style transfer to convey secret messages. We recommend using the combinatorial number system to provide an efficient data embedding solution. Experimental results verified that our proposed algorithm can provide various embedding capacities, produce visually plausible texture images, and recover the source texture.
To the best of our knowledge, these two algorithms are novel and are the first attempt to use texture synthesis and style transfer techniques to achieve steganography.
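The combinatorial number system recommended above for data embedding maps an integer message chunk to a unique k-combination and back; a minimal sketch of that encoding, independent of the texture synthesis itself:

```python
from math import comb

def index_to_combination(n_index, k):
    """Decode an integer into the k-combination with that rank in the
    combinatorial number system, via the standard greedy algorithm:
    pick the largest c with C(c, i) <= remaining index, per digit."""
    result = []
    for i in range(k, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= n_index:
            c += 1
        result.append(c)
        n_index -= comb(c, i)
    return result  # strictly decreasing positions c_k > ... > c_1

def combination_to_index(positions):
    """Inverse mapping: the rank of a strictly decreasing k-combination,
    N = C(c_k, k) + ... + C(c_1, 1)."""
    k = len(positions)
    return sum(comb(c, k - j) for j, c in enumerate(positions))
```

In a steganographic setting, the decoded positions could select which of k candidate patches (out of many) to place during synthesis, so that the placement itself carries the message.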
Teixeira, Inês Filipa Nunes. "Artistic Style Transfer for Textured 3D Models". Master's thesis, 2017. https://repositorio-aberto.up.pt/handle/10216/106653.
Full text source
Chang, Wei-Cheng, and 張為誠. "Deep Learning Based Style Transfer for Videos". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/fy564u.
Full text source
National Chiao Tung University
Institute of Multimedia Engineering
107
Neural style transfer usually works well for abstract styles. When applied to styles such as Japanese animation, whose foregrounds are more complex than their backgrounds, the results are often not as good as expected. We design a method to automatically transfer this type of style onto video. We combine semantic segmentation and spatial control to transfer a specified style to a specified area. By designing the initial image and the loss function, we fix facial distortion and incomplete style transfer. We propose a method that lets users adjust the feature weights of different regions to maintain the artistic mood of the target style, and we also use optical flow to ensure frame-to-frame coherence in the video.
Full text source
SAGAR. "ARTISTIC STYLE TRANSFER USING CONVOLUTIONAL NEURAL NETWORKS". Thesis, 2019. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16763.
Pełny tekst źródłaWang, Shen-Chi, i 王聖棋. "Paint Style Transfer System with the Artistic Database". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/7zz79n.
Full text source
National Dong Hwa University
Department of Computer Science and Information Engineering
95
Digital painting synthesizes an output image with the paint styles of example images applied to an input source image. However, the synthesis procedure usually requires user intervention in selecting patches from example images that best describe their paint styles. This thesis presents a systematic framework for synthesizing example-based rendering images that requires no user intervention in the synthesis procedure. An artistic database is compiled in this work, and the user can synthesize an image according to the paint styles of different well-known artists. We use mean shift image segmentation and a texture re-synthesis method to construct the artistic database, then find the correspondence between example textures and the mean-shift regions of the input source image, and finally synthesize the output images using a patch-based sampling approach. The main contribution of this thesis is a systematic paint style transfer system for synthesizing a new image without any user intervention. The artistic database is composed of re-synthesized mean-shift example images of different artists, which are adopted as learning examples of the paint styles of different well-known artists during the synthesis procedure; the system automatically synthesizes a new image in the paint style of the artist the user selects from the database.
Tu, Ning, and 杜寧. "Video Cloning for Paintings via Artistic Style Transfer". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/6am2q7.
Full text source
National Chung Cheng University
Institute of Computer Science and Information Engineering
104
In the past, visual art usually meant static art such as paintings, photography and sculptures. In recent years, many museums, art galleries, and even art exhibitions have presented dynamic artworks for visitors to enjoy. The most famous dynamic artwork is the moving painting of "Along the River During the Qingming Festival". Nevertheless, it took two years to complete that work: each action of every character had to be planned first, every video frame drawn by animators, and, finally, seamless stitching achieved by using many projectors to render the scene on the screen. In our research, we propose a method for generating animated paintings. It needs only the millions of videos in existing online databases and requires the user to perform a few simple auxiliary operations to achieve the effect of animation synthesis. First, our system lets the user select an object of the desired class in the first video frame. We then employ random forests as the learning algorithm to retrieve from a video the object the user wants to insert into an artwork. Second, we utilize style transfer to make the video frames consistent with the style of the painting. Finally, we use a seamless image cloning algorithm to yield a seamless synthesis result. Our approach allows different users to synthesize animated paintings according to their own preferences. The resulting work not only maintains the original artist's painting style, but also generates a variety of artistic moods for people to enjoy.
Santos, André Loureiro. "Creating Stylised Geographic Maps with Neural Style Transfer". Master's thesis, 2021. http://hdl.handle.net/10316/96078.
Full text source
Beyond their regular navigational applications, geographic maps possess an undeniable aesthetic quality independent of utilitarian purposes, able to propel them into decorative, aesthetic and design-focused contexts. The data and tools provided by the advancement of technology popularise and expedite the usage of aesthetic-focused maps and thereby open new opportunities for the visual exploration of geographic maps. Generative and computational design techniques aid and promote the creation of more numerous, customisable, diverse, and experimental outputs, thus exacerbating the aforementioned opportunity for visual exploration in the context of geographic maps. This dissertation focuses on the application of computational design techniques as a tool for map stylisation and the exploration of aesthetic-focused geographic maps. Within that context, the project centres on the development of a web-based system able to generate stylised maps of any region in the world. The system uses open-access geographic data to render maps of any selected area and a neural style transfer technique to stylise them, thus allowing users to generate stylised maps of anywhere, based on the visual style of any image file. We first introduce a stand-alone computational framework that comprises the components and functionalities tasked with generating stylised maps based on a set of parameters. At the core of this framework is a tile-based approach we conceptualised and implemented to successfully adapt any implementation of neural style transfer for the purposes of map stylisation. Then, we present a public web-based system that combines the stand-alone framework with an intuitive user interface, thereby enabling anyone to access and explore our approach to map stylisation.
Finally, the real-world applicability of the generated stylised maps is explored and illustrated, leading us to conclude that they are a viable way to aesthetically establish connections to geographic places, which can happen through their application in purely decorative or more meaningful design contexts.
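A tile-based stylisation pass of the kind the framework describes can be sketched as follows, with the per-tile stylisation left as a caller-supplied placeholder function (a hypothetical simplification; the actual system presumably also handles consistency across tile seams):

```python
import numpy as np

def stylise_tiled(image, tile, stylise_fn):
    """Apply stylise_fn to fixed-size tiles of an image and stitch the
    results back together, so arbitrarily large map renders can be
    stylised without holding the whole image in GPU memory at once."""
    h, w = image.shape[:2]
    out = np.empty_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]   # edge tiles may be smaller
            out[y:y + tile, x:x + tile] = stylise_fn(patch)
    return out
```

A real map-stylisation pipeline would additionally need to keep the style consistent across tile borders, for example by stylising overlapping tiles and blending the overlaps.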